Code archives/Graphics/Monochrome

This code has been declared by its author to be Public Domain code.


Monochrome by xlsior (2003)
Converts a color image to greyscale
;
; Monochrome -- A function that converts the contents backbuffer() to greyscale.
; 11/23/2003, by Marc van den Dikkenberg / xlsior
; 
; Usage: monochrome(perct#)
;		 perct# is the level of the effect, in percentages.
;		 0 means no change, while 100 is pure greyscales.
;
; Note: The fade itself is probably too slow to be useful,
; and has been included for educational purposes only.
;

Graphics 640,480,16,2
SetBuffer BackBuffer()

gfx=LoadImage("x:\monkey6.jpg") ; LoadImage returns an integer image handle; point this path at an image on your system
DrawImage gfx,0,0
Flip
WaitKey()

For perct=10 To 100 Step 10
	DrawImage gfx,0,0
	Monochrome(perct)
	Flip
Next

fntArial=LoadFont("Arial",24,True,False,False)
SetFont fntArial
Color 255,128,0
Rect 200,220,240,40
Color 255,255,255
Text 260,230,"Intermission"
FreeFont fntArial
Flip
WaitKey()

For perct=100 To 10 Step -10
	DrawImage gfx,0,0
	Monochrome(perct)
	Flip
Next

WaitKey()
End


Function Monochrome(perct#)
	SetBuffer BackBuffer()
	LockBuffer
	For y=0 To 479 
		For x=0 To 639
			temp1=ReadPixel(x,y)
			
			; split the pixel into its blue, green and red components
			orgb=(temp1 And $FF)
			orgg=(temp1 And $FF00) Shr 8
			orgr=(temp1 And $FF0000) Shr 16
			; weighted luminosity of the original pixel
			desb=((orgr*0.299)+(orgg*0.587)+(orgb*0.114))
			; blend each channel towards the grey value by perct# percent
			desr=orgr*(1-(perct#/100))+desb*(perct#/100)
			desg=orgg*(1-(perct#/100))+desb*(perct#/100)
			desb=orgb*(1-(perct#/100))+desb*(perct#/100)

			WritePixel x,y,desb+(desg Shl 8)+(desr Shl 16)
		Next 
	Next 
	UnlockBuffer
End Function

Comments

MathyDude (2013)
Maybe if you replace
[bbcode]

temp1=ReadPixel(x,y)

orgb=(temp1 And $FF)
orgg=(temp1 And $FF00) Shr 8
orgr=(temp1 And $FF0000) Shr 16
desb=((orgr*0.299)+(orgg*0.587)+(orgb*0.114))
desr=orgr*(1-(perct#/100))+desb*(perct#/100)
desg=orgg*(1-(perct#/100))+desb*(perct#/100)
desb=orgb*(1-(perct#/100))+desb*(perct#/100)

WritePixel x,y,desb+(desg Shl 8)+(desr Shl 16)

[/bbcode]

with

[bbcode]

temp1=ReadPixelfast(x,y)

orgb=(temp1 And $FF)
orgg=(temp1 And $FF00) Shr 8
orgr=(temp1 And $FF0000) Shr 16
desb=((orgr*0.299)+(orgg*0.587)+(orgb*0.114))
desr=orgr*(1-(perct#/100))+desb*(perct#/100)
desg=orgg*(1-(perct#/100))+desb*(perct#/100)
desb=orgb*(1-(perct#/100))+desb*(perct#/100)

WritePixelfast x,y,desb+(desg Shl 8)+(desr Shl 16)

[/bbcode]

then it would go faster!


virtlands (2013)
I just tried your program.

That's an interesting grey-scale algorithm: Gray = (Red * 0.299 + Green * 0.587 + Blue * 0.114)

I found this website that has lots of free image-effect and filter algorithms.
http://www.tannerhelland.com/programming-directory/


xlsior (2013)
That's an interesting grey-scale algorithm: Gray = (Red * 0.299 + Green * 0.587 + Blue * 0.114)


It's the proper formula for luminosity -- since the human eye has different sensitivities to light in the red/green/blue ranges of the spectrum, you can't just average the channels out, but have to use the weighted values for the apparent luminosity to remain the same.

More info: http://en.wikipedia.org/wiki/Luma_(video)
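One way to avoid the floating-point weights on every pixel (not what the code above does, just a sketch) is to pre-scale them to integers:

[bbcode]

; Sketch only: integer-weight version of the same luminosity formula.
; 77/150/29 approximate 0.299/0.587/0.114 scaled by 256, and 77+150+29 = 256,
; so the final Shr 8 keeps the result in the 0..255 range.
Function Luma(r,g,b)
	Return (r*77 + g*150 + b*29) Shr 8
End Function

[/bbcode]

For pure green (0,255,0) that gives 149, where a naive (r+g+b)/3 average would only give 85 -- which shows how much brighter green looks to the eye than a plain average would suggest.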

(There are other things that make use of those facts: more than half of the information you observe comes from the green part of the spectrum, less than a third from the red, and barely 1/9th from the blue. JPEG image compression makes use of that fact, and uses chroma sub-sampling as part of the compression algorithm.
It separates the source image into its discrete red, green and blue layers and compresses them individually, using a conservative compression on the green channel and a very aggressive compression on the blue channel, to get you the smallest file size with the least visual impact caused by the lossy (lost) information.)

http://photo.net/learn/jpeg/


Here's a wild idea of mine that may be faster for doing a massive grey-scale conversion on 24-bit images. >>

Since there are 16777216 colors (in 24 bits), pre-compute the grey scales of all 16777216 colors,
and store them in a massive array.


You know, never really considered doing that, but you're right -- the amount of data is not that big, and it would save a lot of floating-point divisions. Could be worth checking out.

(Of course, I posted the source above almost a decade ago. In BlitzMax there are much faster ways of accomplishing this effect in real time, using various BlitzMax modules that make use of shaders or other features of modern 3D accelerator cards.)


virtlands (2013)
O.K. I see now. Just had an idea for optimizing it,
will be back in a while...

I just figured that there is a drawback with pre-computing all grey-codes of all 16777216 colors (for 1 photo),
because most photos (PNG, JPG, etc.) only use a small fraction of the 16777216 colors.


xlsior (2013)
I just figured that there is a drawback with pre-computing all grey-codes of all 16777216 colors (for 1 photo),
because most photos (PNG, JPG, etc.) only use a small fraction of the 16777216 colors.


The easiest way of storing it would probably be a straight (255,255,255) array. That'd be 16 MB as a byte array, or 64 MB as an integer array (which will probably be faster).
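As a rough sketch (the array name and helper below are made up, and the grey values are assumed to be filled in elsewhere), the flat and the three-dimensional forms hold the same data, and the lookup is just the 24-bit pixel value:

[bbcode]

; Sketch only: a flat integer array indexed directly by the 24-bit pixel value.
Dim greylut(16777215)

; A (255,255,255) array indexed as (r,g,b) would hold the same data;
; the flat index is simply (r Shl 16) Or (g Shl 8) Or b.
Function LookupGrey(argb)
	Return greylut(argb And $FFFFFF)	; mask off the alpha byte before indexing
End Function

[/bbcode]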


virtlands (2013)
I had some fun trying to optimize your grey code ideas for speed.

I had this idea of dividing up an image into equal tasks spread across computer cores (=threading).
The threads would run parallel to each other, which would make the program run fast.

Each thread (or core) would devote itself to unique lines of the image, in an interlaced sort of way.

My regular optimized code (Version A) worked fine,
but my experiment with threading (Version B) didn't work at all.

I have a hunch that the WritePixel command cannot be used in threads, and that's why it won't work.

If you want to see the (WIP) code, here are both versions, A & B:
http://tinyurl.com/d7kcxhq

You'll also need the FastPointer.DLL + FreeImage.DLL + .DECLS
http://tinyurl.com/czrvopm

My great idea was the grey-array, which used a giant array
of precomputed greys:
;; Grey values are computed as they occur and stored in the huge "greycode()" array.
;; For quickness, this does not compute the entire 16777216 collection,
;; but only those that exist in the specified image.
;; Also, greycodes that already exist do not need to be recalculated...
;;
Function GreyArray(i.it)
	Local imW_ = i\W - 1
	Local imH_ = i\H - 1

	Local fi = i\fi
	Local save_buf, im_buf, R, G, B

	save_buf = GraphicsBuffer()	; save the currently active buffer
	im_buf = ImageBuffer( i\im )

	SetBuffer im_buf	;; set the buffer to the image buffer

	LockBuffer
	For y = 0 To imH_
		For x = 0 To imW_
			pixel = ReadPixelFast(x,y) And $00FFFFFF	;; strip the "alpha" byte from the pixel

			If greycode(pixel) = 0	;; only compute colours we haven't seen before
				R = (pixel And $FF0000) Shr 16
				G = (pixel And $00FF00) Shr 8
				B = pixel And $FF
				greycode(pixel) = (R*0.299)+(G*0.587)+(B*0.114)
			End If
		Next
	Next

	UnlockBuffer
	SetBuffer save_buf
End Function
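
The second pass isn't shown above; roughly (just a sketch, reusing the same i.it type and greycode() array), it would read each pixel again and write the cached grey back:

[bbcode]

;; Sketch of the apply pass, assuming Dim greycode(16777215) and the same i.it image type as above.
Function ApplyGrey(i.it)
	Local save_buf = GraphicsBuffer()
	SetBuffer ImageBuffer( i\im )
	LockBuffer
	For y = 0 To i\H - 1
		For x = 0 To i\W - 1
			pixel = ReadPixelFast(x,y) And $00FFFFFF
			g = greycode(pixel)	;; cached grey value, 0..255
			WritePixelFast x, y, $FF000000 Or (g Shl 16) Or (g Shl 8) Or g	;; R=G=B, alpha kept opaque
		Next
	Next
	UnlockBuffer
	SetBuffer save_buf
End Function

[/bbcode]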



;--------------------------------------------------
I'm not sure, but I think the 2 arrays

DIM A(255,255,255)
DIM B(16777215)

both take the same number of bytes, which is 16777216 * 4 = 67108864 bytes = 64 MB,
because multidimensional arrays in B3D take LONGS.

A Long = an INT = 4 bytes = 32 bits.

You can perhaps store the data in a bank instead,
and use 3 bytes per colour value (getting rid of the high byte):

b = CreateBank(16777216*3) ; taking approx 48 MB

The drawback would be that retrieval of each 'grey' would be more complex.
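As a sketch of what I mean by "more complex" (names made up), every store and fetch then needs the *3 offset arithmetic:

[bbcode]

; Sketch only: a 3-bytes-per-colour bank, indexed by the 24-bit pixel value.
greybank = CreateBank(16777216*3)	; roughly 48 MB

Function StoreGrey(bank, pixel, g)
	Local offset = pixel*3
	PokeByte bank, offset,   g	; same grey value in all three channel slots
	PokeByte bank, offset+1, g
	PokeByte bank, offset+2, g
End Function

Function FetchGrey(bank, pixel)
	Local offset = pixel*3
	; re-assemble the 24-bit grey pixel from the three stored bytes
	Return PeekByte(bank, offset) Or (PeekByte(bank, offset+1) Shl 8) Or (PeekByte(bank, offset+2) Shl 16)
End Function

[/bbcode]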

