DXT compression algorithm

Post by Altimit01 »

Ok, so I've been looking into converting images into the DXT format (DXT1 mainly, but the idea applies to all of them). For those who don't know, DXT uses texels of 4x4 pixels where each pixel is stored as one of 4 colors. Those 4 colors form a palette for just that texel and are really just two stored colors, with the other two calculated at 1/3 and 2/3 along the gradient between the first two. So here's the general idea I had for converting a standard image's texel to the compressed DXT form.
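
For reference, a DXT1 block is 8 bytes: two 16-bit RGB565 endpoint colors followed by sixteen 2-bit indices. Here's a quick decode sketch of that layout in Python, just to illustrate the format (names are my own, untested):

```python
import struct

def decode_rgb565(v):
    # Expand a 16-bit 5:6:5 color to 8 bits per channel.
    r = (v >> 11) & 0x1F
    g = (v >> 5) & 0x3F
    b = v & 0x1F
    return (r * 255 // 31, g * 255 // 63, b * 255 // 31)

def decode_dxt1_block(block):
    # block: 8 bytes -> list of 16 (r, g, b) pixels, row-major order.
    c0_raw, c1_raw, index_bits = struct.unpack('<HHI', block)
    c0, c1 = decode_rgb565(c0_raw), decode_rgb565(c1_raw)
    if c0_raw > c1_raw:
        # Four-color mode: the two extra colors sit at 1/3 and 2/3.
        palette = [c0, c1,
                   tuple((2 * a + b) // 3 for a, b in zip(c0, c1)),
                   tuple((a + 2 * b) // 3 for a, b in zip(c0, c1))]
    else:
        # Three-color mode: midpoint plus a transparent/black entry.
        palette = [c0, c1,
                   tuple((a + b) // 2 for a, b in zip(c0, c1)),
                   (0, 0, 0)]
    return [palette[(index_bits >> (2 * i)) & 0x3] for i in range(16)]
```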

For each color channel:
Calculate the average and standard deviation of the 16 values.
Remove outliers (more than 3 standard deviations from the average), recalculate, and repeat until none remain.
Take the extremes of the sanitized set and use those as the two endpoint values for that channel of the palette.
Calculate the 1/3 and 2/3 values along that scale.
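
Something like this is what I have in mind for the per-channel step (rough, untested Python; the 3-sigma cutoff is just the number I mentioned above):

```python
def channel_endpoints(values, max_sigma=3.0):
    # values: the 16 samples of one channel (r, g or b) from a 4x4 texel.
    # Repeatedly drop samples more than max_sigma standard deviations from
    # the mean, then take the extremes of whatever survives.
    kept = list(values)
    while True:
        mean = sum(kept) / len(kept)
        std = (sum((v - mean) ** 2 for v in kept) / len(kept)) ** 0.5
        trimmed = [v for v in kept
                   if std == 0 or abs(v - mean) <= max_sigma * std]
        if len(trimmed) == len(kept):
            break
        kept = trimmed
    lo, hi = min(kept), max(kept)
    # The two stored endpoints plus the 1/3 and 2/3 points along the range.
    return lo, lo + (hi - lo) / 3, lo + 2 * (hi - lo) / 3, hi
```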

Combine the 4 values from each of the 3 channels into 4 palette colors.
Calculate the squared distance (cheaper than the true distance) between each of the 4 palette colors and the target color in 3D space, using r, g and b as the axes.
(D = (r_palette - r_target)^2 + (g_palette - g_target)^2 + (b_palette - b_target)^2)
The lowest D value matches the target color to one of the 4 palette colors.
Pack all of that info into the DXT texel format.
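
And the matching/packing half might look roughly like this (again untested; the bit layout assumes the usual DXT1 convention of two RGB565 endpoints followed by sixteen 2-bit indices, row-major, low bits first):

```python
import struct

def encode_rgb565(rgb):
    # Pack an 8-bit-per-channel color down to 16-bit 5:6:5.
    r, g, b = rgb
    return ((r * 31 // 255) << 11) | ((g * 63 // 255) << 5) | (b * 31 // 255)

def match_pixels(pixels, palette):
    # For each of the 16 target pixels, pick the palette entry with the
    # smallest squared distance D = dr^2 + dg^2 + db^2 (no square root).
    indices = []
    for p in pixels:
        dists = [sum((pc - tc) ** 2 for pc, tc in zip(c, p)) for c in palette]
        indices.append(dists.index(min(dists)))
    return indices

def pack_dxt1_block(c0, c1, indices):
    # Two RGB565 endpoints followed by sixteen 2-bit indices = 8 bytes.
    bits = 0
    for i, idx in enumerate(indices):
        bits |= (idx & 0x3) << (2 * i)
    return struct.pack('<HHI', encode_rgb565(c0), encode_rgb565(c1), bits)
```

One format detail to watch out for: DXT1 decoders only use the four-color (1/3 and 2/3) mode when color0 > color1 as raw 565 values, so the endpoints may need swapping (and the indices remapping) before packing.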

So first off: do you guys think this would produce a compressed image that is visually close, or would it end up completely destroying the image? Is it too computationally intensive? Do you have any way (without relying on libraries) to do a linear approximation of a 3D data set? Any other suggestions or comments?

Edit: one thing I'm really not sure about is the linearization: I'm not sure how best to pick the end points. Any thoughts on that?

Edit: well, for this particular linearization I'll probably set my end points at 1/8 and 7/8 along the line formed by the two extremes, to improve resolution a bit.
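
In other words, just so it's explicit:

```python
def inset_endpoints(lo, hi):
    # Pull the endpoints in to 1/8 and 7/8 of the way along the lo->hi
    # segment, so the four palette entries sit closer to most of the pixels.
    span = hi - lo
    return lo + span / 8, lo + 7 * span / 8
```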

Edit: instead of taking the extremes of the normalized data, I'm going to try incorporating PCA to determine the principal axis. Basically, that's the eigenvector of the covariance matrix with the highest eigenvalue, if I understand the technique correctly. That vector should represent the axis along which there is the most variation. I still need to figure out a good endpoint-finding system though.
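
A sketch of what I mean, using plain power iteration instead of a proper eigensolver so there's no library dependency (untested):

```python
def principal_axis(pixels, iterations=32):
    # pixels: 16 (r, g, b) tuples from one texel.
    # Power iteration on the 3x3 covariance matrix converges on the
    # eigenvector with the largest eigenvalue, i.e. the direction of
    # greatest color variation.
    n = len(pixels)
    mean = [sum(p[i] for p in pixels) / n for i in range(3)]
    centered = [[p[i] - mean[i] for i in range(3)] for p in pixels]
    cov = [[sum(c[i] * c[j] for c in centered) / n for j in range(3)]
           for i in range(3)]
    axis = [1.0, 1.0, 1.0]  # arbitrary starting vector
    for _ in range(iterations):
        w = [sum(cov[i][j] * axis[j] for j in range(3)) for i in range(3)]
        norm = sum(x * x for x in w) ** 0.5
        if norm == 0:
            break  # degenerate texel (all pixels the same color)
        axis = [x / norm for x in w]
    return mean, axis  # mean color and unit direction of the best-fit line
```

Projecting each pixel onto the returned axis (dot product with the mean-relative pixel) gives its position along that line.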