Most of our time this week has been spent discussing solutions to our current programming errors and ways to rigorously test our hypothesis.
Below is a breakdown of how we might analyze the images produced using our hypothesis.
Analogous vs. complementary colors
Simply put, we should encourage analogous colors to form, along with about one complementary color. But how do these translate to RGB values?
In the Matisse example we can further break down the colors into their RGB values:
Matisse Red: 216, 004, 001
Matisse Yellow: 254, 194, 59
Matisse Green: 78, 160, 114
Matisse Blue: 68, 145, 203
As we can see, the Matisse red is basically pure red on the RGB scale. It stands out from the rest in just how pure a red it is, with nearly all of its value in the red channel and very little elsewhere. This is our "complementary" color.
When we average the large, medium, and small numbers in our other colors we get values of approximately 205, 151, and 68, where it doesn't matter which channel (R, G, or B) each number falls in.
A more precise testable hypothesis might say:
A good picture is one that has one "complementary" color (a color with a single high RGB value and two low ones) and two or more "analogous" colors (colors whose highest RGB values are similar to each other, whose middle values are similar, and whose lowest values are similar, regardless of which channel each value falls in).
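To make the hypothesis concrete, here is a minimal sketch of how we might test it in code. The function names and the `dominance` and `tolerance` thresholds are illustrative guesses, not settled parts of our method, and would need tuning against real palettes:

```python
# Hypothetical sketch of the color hypothesis: classify colors as
# "complementary" (one dominant channel) or "analogous" (similar sorted
# channel profiles). Thresholds are illustrative, not final.

def sorted_channels(rgb):
    """Return the RGB values sorted high to low, ignoring channel order."""
    return sorted(rgb, reverse=True)

def is_complementary(rgb, dominance=120):
    """One channel far above the other two, like Matisse red (216, 4, 1)."""
    high, mid, low = sorted_channels(rgb)
    return high - mid >= dominance

def are_analogous(rgb_a, rgb_b, tolerance=100):
    """Sorted channel profiles match within tolerance, channel order aside."""
    return all(abs(a - b) <= tolerance
               for a, b in zip(sorted_channels(rgb_a), sorted_channels(rgb_b)))

matisse = {
    "red":    (216, 4, 1),
    "yellow": (254, 194, 59),
    "green":  (78, 160, 114),
    "blue":   (68, 145, 203),
}

print(is_complementary(matisse["red"]))                   # True
print(are_analogous(matisse["yellow"], matisse["blue"]))  # True
```

With these guesses, the red is flagged as complementary while yellow, green, and blue all come out analogous to one another, matching the averaging observation above.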
Development of larger shapes and lines
We want to encourage shapes and lines in our images, but going about this might be tricky. To analyze the aesthetic value of an image, we can use edge detection.
To implement edge detection we will go through the image and record the number of times a pixel's RGB value changes drastically from its neighbor's. Each such change counts as an "edge."
More edges, however, is not necessarily better. The static on an old TV screen has a great many edges, but few larger shapes or lines. Instead, we can count the number of edges from pixel to pixel across the whole image, with the idea that an aesthetically pleasing image keeps that count low. A hypothesis for this might be:
A good picture is one whose total number of edges is around x.
To find the number x, we should program an edge detector and run it on our pixelated Matisse painting.
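As a starting point, here is a minimal edge-counting sketch. It assumes "changes drastically" means the summed per-channel difference between neighboring pixels exceeds a threshold; the threshold of 100 is a placeholder we would tune by running this on the pixelated Matisse painting:

```python
# Minimal edge counter over a grid of RGB pixels. A "drastic change" is
# assumed to be a summed per-channel difference above a threshold; the
# value 100 is a guess to be tuned, not a settled constant.

def pixel_difference(a, b):
    """Total absolute difference across the R, G, and B channels."""
    return sum(abs(x - y) for x, y in zip(a, b))

def count_edges(image, threshold=100):
    """Count neighbor pairs (left-right and top-bottom) that differ drastically."""
    edges = 0
    rows, cols = len(image), len(image[0])
    for r in range(rows):
        for c in range(cols):
            if c + 1 < cols and pixel_difference(image[r][c], image[r][c + 1]) > threshold:
                edges += 1
            if r + 1 < rows and pixel_difference(image[r][c], image[r + 1][c]) > threshold:
                edges += 1
    return edges

# Toy 2x2 image: a red column next to a blue column.
toy = [[(216, 4, 1), (68, 145, 203)],
       [(216, 4, 1), (68, 145, 203)]]
print(count_edges(toy))  # 2 horizontal edges, no vertical ones
```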
Right now our L-System makes a static tree appear on the screen. Interaction is limited: wherever the user clicks on screen merely determines where the tree starts. Soon we will edit this tree to look more like the tree we originally discussed back in this post. However, we already have in mind set values used in this tree that can be further generalized.
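For readers unfamiliar with L-systems, the core idea is simple string rewriting. The sketch below is not our actual tree code; the axiom and rules are the classic fractal-plant example, used purely for illustration:

```python
# Generic L-system rewriting sketch (not our project's code): repeatedly
# apply production rules to a start string. Symbols with no rule are kept.
# The rules here are the well-known fractal-plant example.

rules = {"X": "F+[[X]-X]-F[-FX]+X", "F": "FF"}

def expand(axiom, generations):
    """Rewrite every symbol with its rule (or keep it) for n generations."""
    s = axiom
    for _ in range(generations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s

print(expand("X", 1))  # F+[[X]-X]-F[-FX]+X
```

The resulting string is then interpreted by a turtle-graphics drawer (F draws forward, + and - turn, brackets push and pop state), which is how a static tree ends up on screen.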
Work also progresses on cellular automata, and that will be mostly my focus for this week. I have implemented the storing and retrieving function mentioned previously and will now be working on getting a version of it up and running with rules.
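To show how "rules" plug into a stored grid of cells, here is a minimal one-dimensional cellular automaton sketch. This is illustrative only and uses Wolfram's elementary rule numbering, not our project's own storing and retrieving functions:

```python
# Minimal 1D cellular automaton sketch (illustrative, not our project code).
# Each cell's next state is looked up from the rule number's bits using the
# (left, center, right) neighborhood, with wraparound at the edges.

def step(cells, rule=30):
    """Apply an elementary CA rule to one row of 0/1 cells."""
    n = len(cells)
    nxt = []
    for i in range(n):
        left, center, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        index = (left << 2) | (center << 1) | right  # neighborhood as 0-7
        nxt.append((rule >> index) & 1)              # look up that bit of the rule
    return nxt

row = [0, 0, 0, 1, 0, 0, 0]
for _ in range(3):
    row = step(row)
print(row)
```

The same lookup idea generalizes to a two-dimensional grid, where a stored rule table maps each neighborhood to the next cell state.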
Meeting + New Goals
My partner Taylor Baer and I met today to start discussing our new long term and short term goals for this project. We discussed our prior long-term goal and whether or not it has been met, and reorganized ourselves for future goals.
Although we found out we did not meet our goal for this month, as our hypotheses were not developed on time, we discovered that we were ahead of our second goal to develop the program accordingly. This is a mix of good and bad news, but Taylor has brought up ideas to implement that will better keep us on track for this month.
Furthermore, we have set new long-term and short-term goals for ourselves and each other.
Long-term: Our long-term goals for this month are to finish cellular automata, start implementing CGP, and finish developing our independent hypotheses and bring them together.
Short-term: In the first half of the month we would like our hypotheses complete and brought together.
I myself will be looking into implementing cellular automata and edge detection.