Reversing a blur - image processing

BlitzMax Forums/BlitzMax Programming/Reversing a blur - image processing

ImaginaryHuman(Posted 2010) [#1]
Hey. I'm trying to do two things, a) take an image and apply a horizontal blur to it, then b) undo the exact same blur to result in the original image. You might ask why I don't just refer to the original image to get the result, but let's say I might not have access to it and may only have the blurred image. I will also have knowledge of the original `convolution matrix` at the time of wanting to reverse the blur.

I can understand and have implemented a simple horizontal blur, using a one-row convolution filter. But now I need to `reverse` the effects of it and am stuck. I have been looking through the web at `deconvolution` but it's littered with difficult-to-grasp math solutions, Fourier transforms and all kinds of stuff. Since I KNOW what the convolution matrix was that produced the blur, isn't there an `inverse` convolution which would reverse the effects of it? And if so, what would that look like?

e.g. my convolution filter might sample a sequence of pixels by multiplying them by... 1, 2, 3, 4, 5, 6, 7, 8, 7, 6, 5, 4, 3, 2, 1, divided by the total of 64. How would I reverse this? If I can recover exactly the original image that would be great, but I understand some precision may be lost - as long as it's `close to` the original that'd be fine. Something to do with dividing?
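As a sketch of the forward operation (in Python rather than BlitzMax, purely for illustration), the horizontal blur with that 1..8..1 kernel and wrap-around edges looks like:

```python
def hblur(row, kernel=(1, 2, 3, 4, 5, 6, 7, 8, 7, 6, 5, 4, 3, 2, 1)):
    """Apply a one-row convolution with wrap-around at the edges."""
    n = len(row)
    half = len(kernel) // 2
    total = sum(kernel)          # 64 for the kernel above
    out = []
    for i in range(n):
        acc = 0
        for k, w in enumerate(kernel):
            acc += w * row[(i + k - half) % n]
        out.append(acc / total)
    return out
```

A constant row passes through unchanged, since the weights sum to 64 and that total is divided back out.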


Gabriel(Posted 2010) [#2]
Assuming that your convolution matrix is even invertible, it would require an enormous amount of memory to store all the data needed to make a perfect inverse. Because each pixel is affected by numerous other pixels, all of which are themselves affected by numerous other pixels, etc., you need to analyze the image as a whole, or at least in very large chunks. A perfect inverse is therefore extremely difficult in practice, so we'll leave that aside.

The difficult-to-grasp math solutions you refer to are probably the middle-of-the-road solution. They're close to perfect, but they involve a lot of hard maths. They would do the job. I can't help simplify that, as I don't understand it myself, but you're probably not going to find a very good solution which is simple.

Now, is there a really simple solution? Yes, but it's not very accurate. If I remember rightly, it goes something like this.

Total up the values in your convolution grid. That's the centre value for your new inverse convolution matrix.

Now all the other values are the values in the old convolution grid multiplied by minus 1.

I think that should work, but bear in mind that the results won't be great. It'll be like running one of those crappy Photoshop Sharpen filters on your image.
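Gabriel's recipe above, sketched in Python (the function name is made up for illustration):

```python
def naive_inverse_kernel(kernel):
    """Rough 'sharpen' inverse as described above: the centre tap becomes
    the sum of the original weights, every other tap is negated."""
    inv = [-w for w in kernel]
    inv[len(kernel) // 2] = sum(kernel)
    return inv
```

For the simple 1,2,1 blur this yields -1, 4, -1, which is exactly the shape of a basic sharpen filter.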


xlsior(Posted 2010) [#3]
The problem with reversing a blur is that a lot of information was lost during the blur process in the first place...

Very basic example:

You have a blurred pixel that's now 50% grey.

Did the original area have white pixels next to black ones? Did it contain various shades of grey? Was it comprised of 'opposite' colors? (e.g. magenta + green averages out to grey as well)

Without significant information about the original, you're piling one guess on top of another guess upon another one...
Best case scenario, you'll introduce a LOT of random noise into the picture, and you still won't end up with any significant additional detail compared to the original image. Like Gabriel mentioned, it'll be like applying a Photoshop sharpening filter.

CSI-style "image enhancement" is a plot device, not reality. :-?


ImaginaryHuman(Posted 2010) [#4]
Thanks for the reply Gabriel. I tried the sharpen filter, with the total in the middle and all other values *-1, dividing by the total or by the total minus all the 1's. ... either way, not particularly great results, as you said. There are a few sites showing in more detail why this doesn't produce good results.

I don't really need to invert `any` convolution, just this very specific one. I thought maybe that would allow a simpler solution. All the blur has done is basically emulate an alphablended drawing of the image several times at an offset, divided by the number of instances. So I thought maybe there is an easy way to extract the contributions of each.

i.e. if the current pixel basically spreads itself out across several other pixels by multiplying those pixels by a given value, can't I go backwards by some kind of division by those values, or something relating to them? Maybe 1/value?

[edit] xlsior, yea, I can see what you mean. There are some things out there, like a `Wiener filter` which seems to get very good results reversing a blur, but understanding it enough to implement it in code is pretty hard.

I tried an unsharp mask, it's not too bad, but not great.

[edit2] On second thought, I can see that the gray pixel could be the result of all kinds of original colors, and I wouldn't be able to retrieve the originals, EXCEPT THAT I know exactly what the process was that caused the grey pixel to be created... I know what the convolution was. So doesn't this mean there should be a way to reverse it?


ImaginaryHuman(Posted 2010) [#5]
I was looking at this:

http://www.mathworks.com/products/image/demos.html?file=/products/demos/shipping/images/ipexwiener.html

What I want to do is basically what his first example shows... horizontal motion blur followed by a filter to remove it.

And this basically describes the possibility of an inverse:
http://www.cs.cf.ac.uk/Dave/Vision_lecture/node19.html

ie

"One major reason that Fourier transforms are so important in image processing is the convolution theorem which states that
If f(x) and g(x) are two functions with Fourier transforms F(u) and G(u), then the Fourier transform of the convolution f(x)*g(x) is simply the product of the Fourier transforms of the two functions, F(u) G(u).
Thus in principle we can undo a convolution. e.g. to compensate for a less than ideal image capture system:
Take the Fourier transform of the imperfect image,
Take the Fourier transform of the function describing the effect of the system,
Divide the former by the latter to obtain the Fourier transform of the ideal image.
Inverse Fourier transform to recover the ideal image.
This process is sometimes referred to as deconvolution."
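The quoted recipe can be tried directly in Python with numpy's FFT (a sketch on a single row of float data; the row length 101 is chosen so the kernel's transform has no zeros to divide by):

```python
import numpy as np

n = 101                              # odd length: this kernel's FFT has no zeros here
rng = np.random.default_rng(0)
signal = rng.uniform(0, 255, n)      # a made-up row of pixel values

# The 1,2,...,8,...,2,1 kernel from post #1, laid out for circular convolution.
weights = np.array([1, 2, 3, 4, 5, 6, 7, 8, 7, 6, 5, 4, 3, 2, 1], float) / 64.0
kernel = np.zeros(n)
kernel[:8] = weights[7:]             # centre tap and right half at the front
kernel[-7:] = weights[:7]            # left half wraps around to the end

F = np.fft.fft
blurred = np.real(np.fft.ifft(F(signal) * F(kernel)))    # convolution theorem: blur
restored = np.real(np.fft.ifft(F(blurred) / F(kernel)))  # division undoes it
```

This recovers the row to float precision; the catch, as later posts explain, is that a real blurred image stores rounded bytes, not these exact floats.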


xlsior(Posted 2010) [#6]
What I want to do is basically what his first example shows... horizontal motion blur followed by a filter to remove it.


Heh.


The trick there is that they used a greyscale image -- that would be fairly straightforward (e.g. pixel 3 is 50% of the value of pixel 6, and 25% of the value of pixel 5, and 10% of the value of pixel 4, etc.) and you can walk through the entire image.
In greyscale, you throw most of the uncertainty out of the window: It's trivially easy to calculate a percentage of the luminosity of a certain pixel to apply to another one, but it's next to impossible to do the same with a color image reliably.

Again, is that grey pixel the end result of black and white? Two other greys? Magenta and green? Orange and blue? Red and cyan? Any fractional mixture of any of the above? The answer to that makes a HUGE difference when trying to unblur a color image, but is irrelevant for a greyscale image.

Although I suppose that does mean that you CAN create an unblurred black-and-white image out of a blurred color image if you know the exact algorithm that was applied to it in the first place: convert the blurred image from color to proper black-and-white first, and then reverse the math. :-?
Note that to create a proper black-and-white version of the image, you'll have to consider the relative luminosity of red, green and blue -- your eyes are more sensitive to light in certain wavelengths than at others, so you can't simply average the three color channels together and call it good.
The proper mixture to obtain the luminosity of a color is red: 0.299, green: 0.587, blue: 0.114.

I have a function to convert full color to proper greyscale in the code archives, here: http://www.blitzbasic.com/codearcs/codearcs.php?code=2089
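A Python equivalent of that weighting (the linked archive code is BlitzMax; this just restates the formula above):

```python
def luminance(r, g, b):
    """Perceptual grey value using the 0.299/0.587/0.114 weights quoted above."""
    return 0.299 * r + 0.587 * g + 0.114 * b
```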


ImaginaryHuman(Posted 2010) [#7]
Hey. Interesting.

I can see what you mean about it being better for grayscale but I'm fairly sure they're using the Wiener filter to undo blur on any image, colored or otherwise. If I were to convert the image to grayscale I would lose all the color, so how would I get the color back?

I understand about the human eye wavelengths, etc... just that if I convert it that way, I have to be able to convert back to non-human-eye grayscale otherwise the end result after deblur won't be the same, right?


ImaginaryHuman(Posted 2010) [#8]
Hey, there isn't much difference between working on a grayscale image and working on each individual channel of R, G, B separately. So surely I could just do one channel at a time and treat each as `grayscale`? I know the overall balance of color isn't the same, but.


Floyd(Posted 2010) [#9]
Theoretically there seems to be no problem. But there are two practical issues.

First, the inverse is not as easy as you'd like. The forward transformation looks like one equation, but it is really a system of equations, one for each pixel. Each equation is just the previous one "shifted" one position. Inverting this transformation is equivalent to inverting a large matrix.

Assuming you get this far, another problem arises. If all you saved was the resulting image, then you have only the approximate result of the convolution. The actual numbers were of the form (integer/64). These get rounded to integers for the new color values. Thus you begin with slightly inaccurate numbers when doing the inverse operation.


ImaginaryHuman(Posted 2010) [#10]
Hey Floyd. I don't quite understand the part about separate equations for each row? You mentioned about inverting a large matrix... how is that done?

I do follow the part about a loss of data. Originally the image bytes would be converted to Doubles, then the blur would be performed, so that the end result of blurring would be as accurate as possible. This would then be converted back to bytes... thus losing up to 1/256th of the accuracy, right?

That may not be a problem, I can work with the error later. The hard part for me is how to reverse the blur at all. Let's just assume I am okay with it not being 100% accurate.. but the closer the better.


ImaginaryHuman(Posted 2010) [#11]
Still researching..


Floyd(Posted 2010) [#12]
I don't quite understand the part about separate equations for each row?


Take a simple example with weights 1,2,1. We use these, then divide by 4 when the smoke clears.

Thus for each pixel in a row we multiply by 2 then add the left and right neighbors. At the endpoints we can "wrap around" to the other end. Consider an example where the picture width is 100. Denote the original values as x1, x2, x3 ... x100. These will be transformed into y1, y2, y3 ... y100. This is done by

y1 = x100 + 2*x1 + x2 ( the wrap around )
y2 = x1 + 2*x2 + x3
y3 = x2 + 2*x3 + x4
.
. continue like this until the end
.
y100 = x99 + 2*x100 + x1 ( wrap again )


This is a system of one hundred equations. The goal now is to invert it:

x1 = some function of the y's
x2 = some function of the y's
x3 = some function of the y's etc.

The hard part is figuring out this inverse. In linear algebra the system of equations is represented by a matrix and inverting the system corresponds to inverting the matrix.

I may give this a try. The question of how much can be recovered is interesting.
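Floyd's system can be set up and inverted numerically, e.g. with numpy (a sketch, not the thread's BlitzMax; width 101 is used because for the 1,2,1 weights an even width actually makes the matrix singular, as the width-4 example in post #27 shows):

```python
import numpy as np

n = 101
row = np.zeros(n)
row[0], row[1], row[-1] = 2, 1, 1      # y_i = x_{i-1} + 2*x_i + x_{i+1}, with wrap-around
M = np.array([np.roll(row, i) for i in range(n)]) / 4.0   # each row: previous one, shifted

x = np.random.default_rng(1).uniform(0, 255, n)   # a made-up original row of pixels
y = M @ x                                          # the blur: all 101 equations at once
x_back = np.linalg.solve(M, y)                     # invert the system
```

np.linalg.solve recovers x to float precision, which is the "exact inversion" Floyd reports in post #25, before byte rounding spoils it.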


ImaginaryHuman(Posted 2010) [#13]
Hmm ok. I pretty much follow. For me though I think I would just have `1 equation`.. just contiguous memory, letting the end of a row wrap around to the start of the next.


xlsior(Posted 2010) [#14]
I'd think it's a ton easier to iterate through a two-dimensional array containing the pixel values than it'd be to create individual equations...

Worst case, you may need to handle the edges separately, but the bulk in the middle should be fairly straightforward.


ImaginaryHuman(Posted 2010) [#15]
Yes, but the question is how ;-)


Duckstab[o](Posted 2010) [#16]
From what I've messed with in the past, a good solution is to add the data needed to revert within the image itself,

i.e. a 512*512 image padded to 515*512.

The first three columns of the pic are duplicated pure values of the original pic.

Then do a blend from pixel 4 onward;
pick a blend formula (Pixel = (Pixel/2)+(PixelLeft/4)+(PixelRight/4))

Now from the 3 pure columns you can compare columns 4, 5 and 6 to recover the blend/blur formula and revert the whole image.

I'm looking for some old code, since I might not have explained this very well - too much whiskey :)


ImaginaryHuman(Posted 2010) [#17]
Sounds very interesting, I would be interested to see the sourcecode if you are willing. Are the 3 pixels needed if you already KNOW what the blur formula was, or is that still a part of the way that it undoes the blur?


ImaginaryHuman(Posted 2010) [#18]
I tried the blurring technique where you keep the original pixels at the start. I basically did it by only reading pixels to the left of the current pixel to create the blur, and my blur formula was just that every pixel gets the same weight. So if there are 8 pixels in the blur, all their values are added and divided by 8 to get the total.

Then to unblur, I took the pixels to the left of the current pixel and either subtracted them from the current pixel * 8, or divided them all by 8 and subtracted from the current pixel. The first way creates errors due to loss of decimal precision (the blurred image is stored in bytes, not the original Doubles, so there is lots of rounding `noise`), and the second way makes the image darker without restoring the proper coloration.

So I am a bit at a loss. I could live with the method of multiplying the current pixel by 8 then subtracting the left pixels, but this produces an error difference of between 1 and 8 for every pixel. To get a perfect blur to undo I would have to add this error to the blurred pixel value, but then the image is noisy.
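A Python sketch of the scheme just described (equal-weight blur over the current pixel and the 7 to its left, keeping the first 7 pixels unblurred as seed values, as in post #16):

```python
def blur_left8(row):
    """Blur each pixel with its 7 left neighbors; first 7 pixels stay 'pure'."""
    out = list(row[:7])
    for i in range(7, len(row)):
        out.append(sum(row[i - 7:i + 1]) / 8.0)
    return out

def unblur_left8(blurred):
    """Unwind the blur left to right, using the seed pixels and the
    already-recovered values: x_i = 8*y_i - (x_{i-7} + ... + x_{i-1})."""
    x = list(blurred[:7])
    for i in range(7, len(blurred)):
        x.append(8.0 * blurred[i] - sum(x[i - 7:i]))
    return x
```

On full-precision floats the round trip is exact; once `blurred` is rounded to bytes, each recovered pixel inherits roughly 8x the rounding error and the errors compound left to right, which is the noise described above.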

Is there a perfect formula for reversing the blur without the noise?


xlsior(Posted 2010) [#19]
Is there a perfect formula for reversing the blur without the noise?


There's always going to be a little noise, since you're losing some of the precision of the original values... So it's impossible for any formula to be *perfect*, but depending on the formula used it might be close enough that a casual observer won't see the difference.


ImaginaryHuman(Posted 2010) [#20]
Yah but I need it to be exactly like the original. So I obviously have to find some other way to do the blur and store more data so that it can be fully reversed.


xlsior(Posted 2010) [#21]
If any pixel is a mixture of multiple others, you're going to need to store the entire original value somewhere, or you're always going to have rounding issues.

e.g:
pixel 1 = 213.
25% of pixel 2 is made up of pixel 1.
213/4 = 53.25.. but due to rounding it would get rounded down to 53 in pixel 2

When reverting the blur, you'd get 53 * 4 = 212 (instead of the original 213). The final deviation can end up larger still, due to multiple pixels all affecting each other and amplifying each other's inaccuracies.
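The arithmetic above, spelled out in Python:

```python
pixel1 = 213
contribution = pixel1 / 4      # 53.25, the exact share of pixel1 in the blend
stored = int(contribution)     # 53 once truncated to an integer byte value
recovered = stored * 4         # 212, not the original 213
```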

You can make the error margin smaller by storing your blurred image in a 16-bit image file format, but unless you encode all the original pixels in the file somehow, you ARE going to lose information. It's going to be impossible to get a 100% binary file match without it, even though there may not be a perceptible difference.

Oh, and something else to consider: If you store your blurred image in any non-lossless fileformat (like JPEG) you can also forget about a 100% reversal, since some information got lost irretrievably during the process of converting it to JPEG in the first place.


ImaginaryHuman(Posted 2010) [#22]
Yah I would store in a lossless format anyway. I will keep researching some other approaches.

Thanks.


Duckstab[o](Posted 2010) [#23]
No luck on the source code. As xlsior said, from converted images I had on my hard drive, over a 1024*768 image there was as much as an 11% difference in pixels overall, and there were artifacts around dramatic light/shade.


ImaginaryHuman(Posted 2010) [#24]
Thanks for looking into it. The amount of difference depends on the size of the blur; for a 16-pixel blur I got errors from 0..15 off from what they should be. Oh well.


Floyd(Posted 2010) [#25]
I finally got around to looking at this. For the "seven 1's" weighting the convolution can be inverted if the image width is not a multiple of 7. The inverse depends on the width mod 7, so there are six possibilities. I wrote a little test for the "2 Mod 7" case. Note this was done in Blitz3D because I find it easier for little pixel-pushing programs.

The inverse works exactly when used on floating point data saved from the blur. It looks rather bad when used with pixels of the blurred image. I think this is inevitable. Exact inversion is impossible in this case as different original images can produce the same blurred output.

The exact inversion runs in a few seconds while the pixel reading version takes a couple of minutes on my rather old PC. Anyway, here's the test program and grayscale image.

http://floyd2.webs.com/BlurTest.bb
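Floyd's divisibility claim is easy to check numerically: the eigenvalues of a circulant blur are the FFT of its first row, and the blur is invertible exactly when none of them is zero. A Python sketch (separate from the Blitz3D test above):

```python
import numpy as np

def seven_ones_invertible(width):
    """True if the equal-weight 7-tap wrap-around blur can be inverted
    at this image width (i.e. no zero eigenvalue)."""
    row = np.zeros(width)
    for off in (-3, -2, -1, 0, 1, 2, 3):
        row[off % width] += 1.0 / 7.0
    return np.min(np.abs(np.fft.fft(row))) > 1e-9
```

Widths that are multiples of 7 produce a zero eigenvalue (a sum over all seven 7th roots of unity), so those blurs are not reversible even in exact arithmetic.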




ImaginaryHuman(Posted 2010) [#26]
Thanks Floyd, it's interesting, and while I'm not a Blitz3D coder I did read your code and can mostly follow it. I don't entirely understand your concept of weighting and modulos and why this all revolves around 7's, but maybe you can explain further.

The performance is similar to my previous efforts... using floats/doubles I can get an exact reversal based on the precise data, but by storing the blurred image as bytes, losing all the precision, it's just not going to be totally reversible without some kind of error/difference. I did try storing the error value by adding it to each blurred pixel as I went along; this produced a quite grainy/noisy image, but did allow it to be perfectly recovered. Trouble is I need both a) a very smooth blur, and b) perfect reversibility. Maybe not possible? lol

But it's fun trying.


Floyd(Posted 2010) [#27]
I'm using my interpretation of your seven 1's weighting. For each pixel you add the three neighbors on either side, and divide by seven at the end. Everything "wraps around" so left and right endpoints are neighbors.

This is a linear transformation. The inverse, if any, is unique. There is no choice in the matter.

Here's a simple example of how this can fail. Consider the weights 1,2,1 and an image of width 4. The middle pixel gets the 2, and this can be any of the four pixels in a row. The corresponding matrix for this linear transformation is

1 2 1 0
0 1 2 1
1 0 1 2
2 1 0 1 The next shift would put us back where we started.

This has no inverse. Notice that row1 + row3 = row2 + row4 = 2 2 2 2. Whenever there is a linear relationship between rows, as there is here, the transformation has no inverse. This is because it maps different vectors to the same result, so there is no way to undo that result. In this scenario a "vector" just means a row of pixels.
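That singularity is easy to confirm numerically (Python sketch):

```python
import numpy as np

# The width-4 matrix for weights 1,2,1; row1 + row3 = row2 + row4,
# so one dimension of information is lost and the blur cannot be undone.
M = np.array([[1, 2, 1, 0],
              [0, 1, 2, 1],
              [1, 0, 1, 2],
              [2, 1, 0, 1]], float)
```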

If you are not too weary of research you could investigate circulant matrices. That's what these transformations are, each row being the previous one rotated to the right. They have nice algebraic properties that make them much easier to manipulate than general matrices. For example they commute: M * N = N * M when both are circulant and the same size.
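The commutation property is quick to verify numerically (Python sketch with two arbitrary 5x5 circulants):

```python
import numpy as np

def circulant(first_row):
    """Build a circulant matrix: each row is the previous one rotated right."""
    first_row = np.asarray(first_row, float)
    return np.array([np.roll(first_row, i) for i in range(len(first_row))])

A = circulant([1, 2, 1, 0, 0])
B = circulant([3, 0, 1, 1, 2])
```

A @ B and B @ A agree, because every circulant is a polynomial in the same cyclic-shift matrix (equivalently, all circulants are diagonalized by the discrete Fourier transform).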