Using the local average color of an image to reduce differences in lighting
I am new to OpenCV and I am currently working on 'Diabetic Retinopathy Detection' (a Kaggle competition launched three years ago; more details here: https://www.kaggle.com/c/diabetic-retinopathy-detection/data). Currently, I am trying to achieve image-processing results similar to those depicted in the image below (source: http://blog.kaggle.com/2015/09/09/diabetic-retinopathy-winners-interview-1st-place-ben-graham/):

[image: sample retina photos before and after the winner's preprocessing]



I have tried different approaches, including histogram equalization and Contrast Limited Adaptive Histogram Equalization (CLAHE). CLAHE gives the best results so far, but nothing comparable to the images above. I got some ideas from this question (How to remove the local average color from an image with OpenCV) but couldn't reproduce the results. If someone can show me how this can be done with OpenCV or any other Python vision library, that would be great. Sample images can be downloaded from the Kaggle site (link above). Thanks.



Here is my code so far:



import cv2 as cv

def equalize_hist(input_path):
    img = cv.imread(input_path)
    # Equalize all three channels (range(0, 2) only covered channels 0 and 1)
    for c in range(3):
        img[:, :, c] = cv.equalizeHist(img[:, :, c])

    cv.imshow('Histogram equalized', img)
    cv.waitKey(0)
    cv.destroyAllWindows()


def clahe_rgb(input_path):
    bgr = cv.imread(input_path)
    # Apply CLAHE to the lightness channel only, so colors are preserved
    lab = cv.cvtColor(bgr, cv.COLOR_BGR2LAB)
    lab_planes = list(cv.split(lab))  # split() returns a tuple; make it mutable
    gridsize = 5
    clahe = cv.createCLAHE(clipLimit=2.0, tileGridSize=(gridsize, gridsize))
    lab_planes[0] = clahe.apply(lab_planes[0])
    lab = cv.merge(lab_planes)
    bgr2 = cv.cvtColor(lab, cv.COLOR_LAB2BGR)
    cv.imshow('CLAHE RGB', bgr2)
    cv.waitKey(0)
    cv.destroyAllWindows()


def clahe_greyscale(input_path):
    img = cv.imread(input_path)
    gray_image = cv.cvtColor(img, cv.COLOR_BGR2GRAY)
    clahe = cv.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    cl1 = clahe.apply(gray_image)
    cv.imshow('CLAHE Grayscale', cl1)
    cv.waitKey(0)
    cv.destroyAllWindows()
  • You should edit your code into the question.

    – barny
    Nov 21 '18 at 22:41











  • Hi. I have posted my code above.

    – Muhammad Irfan Ali
    Nov 22 '18 at 8:33
python opencv computer-vision
edited Nov 22 '18 at 8:32 by Muhammad Irfan Ali
asked Nov 21 '18 at 17:52 by Muhammad Irfan Ali
1 Answer
The code you show is doing a local histogram equalization, whereas the highlighted text you posted talks about removing the average color from each pixel.



Removing the average color could be done like this:



import cv2

# img is the BGR image loaded with cv2.imread(...)

# Blur the image to estimate the local average color
blurred = cv2.blur(img, ksize=(15, 15))

# Take the difference with the original image,
# weighted by a factor of 4 to increase contrast
dst = cv2.addWeighted(img, 4, blurred, -4, 128)


You can adjust the blur kernel size (15 above) to find something that works for your use case.



You may need to downscale the images to a common size before doing this, to get comparable results (as also noted in the blog post you cite).






share|improve this answer


























  • It gives me an image with dark grey shade all over it. Were you able to get similar output as I posted above ?

    – Muhammad Irfan Ali
    Nov 22 '18 at 14:39











  • I don't have the original kaggle data. Did you try downsizing the image or increasing the kernel size?

    – w-m
    Nov 22 '18 at 14:42











  • Hi. yes I have. With increasing kernel size, the lines become sharp, but CLAHE results are even better than that. For testing, you can download one image from the 'sample.zip' of kaggle link I pasted above (it's ~10 Mb of data I think).

    – Muhammad Irfan Ali
    Nov 22 '18 at 14:55






  • 1





    I've had another look at the blog post. It shows excerpts from a technical report that can also be found online: kaggle.com/blobs/download/forum-message-attachment-files/2795/… - on page 6 you get the Python source code.

    – w-m
    Nov 22 '18 at 15:11











  • The code there increases the contrast by using a weighted difference (with a factor of 4). I've updated the code in the answer to do the same.

    – w-m
    Nov 22 '18 at 15:12


edited Nov 22 '18 at 16:07
answered Nov 22 '18 at 13:46 by w-m