We are making a robot play chess against a human, so the robot needs to see the moves the human makes.
The most intuitive approach is to take an image before the human's move and another after it, and compare the two.
The two columns below show black-piece moves: the left column shows a black pawn moving from one white square to another white square, the right column a black pawn moving from a black square to a black square.
First the board before the move, then after it, and finally the difference image:
![](https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgA3ITnPN42UC9kd1NC3sRta4T08StRxKH5Q7CQdnfEa6UKXdkIC0YtkltrxB1WQf2MoaAWF2SCUIG09HsFIG4ONoFgxtPh7YrW0HoZ7JrtiXXeMckuoJN7cXG0WUV9P4vGC6oaWMbs34E/s400/I_0.png)
![](https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjF1Q2WeeZ93IBdxJkWvA7NutJKcl5VbHaAkB5Hr82KQd38SOKA-BT5cmV1hB45qZCgd6s_4AUmVlsjsXSaaigXmibf3l-8jO_UdLeRJ5JsrvLJ0CNvSTUWS0iqEygJwxAT34ppCG0yYHw/s400/I_1.png)
![](https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjzjn2MWKbwPEDnSngekHE6C18QPsGedGJNkZbfVRccotrKdoLO7ba47I9-SqpAT37yfWRirlZtiG0yHgchfsValM8MymbWY1MdArBuxgdWjIGRQGzrlouqZyK7KFuPn7xV8sQ8ISdXpe0/s400/Delta_1.png)
This little test shows that detecting piece movement should be easy (it works equally well with white pieces).
Things obviously look different when the illumination changes. We are designing this system to be used at exhibitions, so we have to assume some changes in global illumination, although we will set up lights to provide a reasonably stable baseline. The following two images were taken within a few minutes of each other.
![](https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhh-U_0cqN5j6SCon4_CGd5ukLLtHtv71f51uvBI6W_nFGdV5uhMxCtj6Znu3kP_bFz3gcn_05214juHbFukACHCtKEflas1CZ7QqGRQ9xshcTJL47qsB8uB40YGMFIEiJYQR4hyYGFG7w/s400/light_problems.png)
and the difference:
![](https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjgUFPI_xs3KlPA3XBLNB6vM0mzeSLjo8swfAIYykmiWHskn4UmUGloXyfT2Eaxk6A-5DNtrL0c9-ZlkrIJaJUc0aIYMk23Y2lJ7WFCDirF9Lr3FtH8QFIGsuLUZffGOUBOwcoHABTosdM/s400/Delta_light_problems.png)
Thresholding the difference image yields:
![](https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiDv9dGPBz2yrEaPp6b6XIF-OeqSQiczQ2Qh7n3mT-UI28Hh_JR0SU9L5hv1zmUOVQr6e9gYxdi7lMoJ_Ia46gj18J9cFJOqOTTuh1B_rcpE_9x21GSmB4V1Q40F8ifSBwL3eeL_ogR3eU/s400/Delta_light_problems_thresh.png)
I find it hard to believe that this simple approach stays stable when the global illumination changes more than in the sample images I have right now. To compensate, I hope that the difference of the edge images of the before and after frames is more robust to global illumination changes; let's see how it looks.
First, the edge difference for the two black pawn moves, for comparison:
![](https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhiE3zGS3lH9lf5oVKajs7sa8gPaUAMlceHJ_x2SP22wU91X-fXTABW51yiDp4GfBW_BxihnpXRHjiYu318DzgpIrsoLkGUWCmAomVwOmAwpqYlx6g3OE4KMQIGAhrdfTvkR2HF_qxwu7o/s400/Delta_1_E.png)
Now the images with the illumination problems; here I have thresholded the images right away:
![](https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiUAsm4_D2RZEPO6AEKJS9QvqXhjnRwAjHzVZpZLj0fKlutjijuEtZWCAFUhzopNa3oEq6VJC_d_S-qGfM-uh6bgBoSX9fl2eFluOkzwR91fEdIay7saLKmxKEQnpjHgD10p2gtqsDCoMs/s400/light_problems_E.png)
![](https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiOSwhXbdArNwQK2P2VB7IaCIk5dBUwqaojVP3HlbLcrfyMOI_DrWSa1SRLbMz_YIC39mbfeK7q4JudpMCS-hF7cDgC3_yX7LqRu2HLg-vFXaHWRFRG8JfReoNV1nKk6KgUftJ99IsQGYI/s400/light_problems_E_multi_level.png)
A tuned circle Hough transform reveals the following circles in the problematic images, using intensity and a static threshold of 0.1 (intensity in the range [0:1]). The red areas are where the pieces were before the move, and the yellow areas are where they are after. The Hough transform has been tuned to produce no false negatives.
From the above image, we can see that the sizes of some of the circles don't match the sizes of the pieces. I expect that further processing can recover the correct sizes and also reject the false positives.
Using the thresholded difference of edges as input for the Hough transform yields:
Once again, the parameters of the Hough transform were tuned to produce no false negatives while capturing all positives. Far more false positives are produced than with the intensity-difference image. The sensitivity of the Hough transform was actually decreased slightly, but the image had to be smoothed several times to achieve the desired result.
Conclusion: both of the discussed methods look promising for identifying areas of interest for more complex processing.
The following problems need to be solved next:
- Investigate how this works when illumination changes
- Kill false positives
- Identify the squares the changes are in, so the chess engine can be informed about the move
First we detect the areas of interest by thresholding the difference of edge images.
Then we dilate with a 3x3 square kernel 9 times and find the connected components in the dilated image; components narrower or shorter than a pawn's radius are discarded, since we are only interested in pieces that have moved between squares.
This leaves us with just the squares that pieces have moved from and to, which the move logic then interprets.
3 comments:
Hi,
I'm asking you this question since you seem to master OpenCV.
I'm trying to use the OpenCV function cvHoughCircles in Python,
but I can't work with the return value of the function.
e.g.: p = cvHoughCircles(..)
The only thing I can do is:
p.total => returns the number of circles detected
I want to get the center and the radius.
p[0] returns an error: unindexable object
Do you know how to use this function? I've searched the web but found nothing!
Can you send me the correct syntax at zbiolb@yahoo.fr
Thank you
Hi, this is a very late response indeed, but I haven't been at my blog for a long time...
Anyway, I don't remember how to use cvHoughCircles from Python; the only thing I remember is that the documentation for the Python wrapper to OpenCV isn't the best. I hope you have already figured it out. Otherwise I can tell you that the C documentation for OpenCV is very good.