Reading the file back in requires parsing the exported text into something the code can understand.
Essentially, our image is a series of rules, most of which are stored in a dictionary. In the database, however, they are stored as a string, so to read them back in, each column in my database needs a corresponding parsing function.
ParseCells is my attempt at this: a class written to contain functions for the express purpose of parsing.
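As a sketch of what one such parsing function can look like: the rule-string format ("key=value" pairs separated by semicolons) and the Integer types below are assumptions for illustration, not necessarily the exact format my database uses.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of one ParseCells-style function. The "key=value"
// pairs separated by semicolons, and the Integer types, are assumptions.
public class ParseCellsSketch {
    // Turn a rule string from a database column, e.g. "2=1;5=0",
    // back into the dictionary format the program works with.
    public static Map<Integer, Integer> parseRules(String column) {
        Map<Integer, Integer> rules = new HashMap<>();
        for (String pair : column.split(";")) {
            if (pair.isEmpty()) continue;
            String[] kv = pair.split("=");
            rules.put(Integer.parseInt(kv[0].trim()),
                      Integer.parseInt(kv[1].trim()));
        }
        return rules;
    }
}
```

Each database column would get its own small function in this style.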
Now, it is time to see if what my functions have parsed will be accepted by a cells object, and if so, if I can manipulate them to recreate the image from these rules.
Creation of a Cell
Reading a cell back in turned out to be more effort than just parsing and creating a new cell object. Once I had parsed everything and created a new cell object, I ran into two problems:
The newly created cell was not exactly the same
Iterating was broken
Both these problems occurred because of how the code was previously written.
Debugging Recreating a Cell
Discovering faulty functionality in exporting
The first problem turned out to be partially due to the way I was creating cellular automata. It turns out a small section of code that I presume was supposed to be commented out wasn't, which meant that as the cellular automata progressed, they randomized themselves. This change was not recorded in the output, and caused the recreated automata to share the same seeds, colors, and starting ruleset, becoming similar, but not the same.
However, removing the self-randomizing code means that the cells can no longer iterate correctly.
This iteration is something that I have been fighting with for a while now. I successfully wrote a function that appears to work for exporting, and automatically iterates as needed without randomization.
However, the iteration for reading in is very much broken. I know that reading in from the database is happening correctly, and that the drawing function works; however, somewhere between Cells and Pattern I am running into issues.
Odd functionality is being displayed where it *sometimes* works.
About one in every ten runs (though not predictably; it could be anywhere from 1 in 7 to 1 in 15), the program will recreate the image perfectly.
However, most of the time, it recreates something like this:
This ended up being because the seeds were set before the colors, when the seeds depend on the size of the colors list to be set correctly. It was a very small detail, but it explained why it would sometimes work (the data was already in memory) and sometimes wouldn't.
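The dependency can be sketched with a hypothetical reconstruction (the field and method names here are mine, not the real ones from my Cells code): the seeds array is allocated against the size of the colors list, so colors must be assigned first.

```java
import java.util.List;

// Hypothetical reconstruction of the ordering bug: the seeds array is
// sized from the colors list, so colors must be assigned first.
public class CellOrderSketch {
    List<Integer> colors;
    int[] seeds;

    void setColors(List<Integer> parsedColors) {
        colors = parsedColors;
    }

    // Must be called AFTER setColors: the allocation below
    // depends on colors.size() already being correct.
    void setSeeds(List<Integer> parsedSeeds) {
        seeds = new int[colors.size()];
        for (int i = 0; i < Math.min(parsedSeeds.size(), seeds.length); i++) {
            seeds[i] = parsedSeeds.get(i);
        }
    }
}
```

Calling setSeeds before setColors here would throw a NullPointerException or size the array wrong, which mirrors the bug.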
Mutate + Score + Insert into Database
Ironically, after making sure the outputted and inputted images were the same, I now need to make sure the new one is slightly different. This will be the mutation step in our genetic algorithm.
In this step I ran into less trouble, and was able to successfully mutate the image using a premade function of mine, score it, and insert it back into the database.
Two cool things happen here:
The image randomizes itself, including rules and colors
The mutated image gets a chance to become the “new best” image, and be mutated instead of the old one.
Unfortunately, the latter requires many clicks. This needs to be streamlined by having the program pick the top ~10 best images and mutate them.
Mutating Ten Different Images
This step is less than thrilling, as it involves reading in ten images and following the exact same steps above.
However, its introduction means that it is much easier to export mutant automata in bulk.
The results are in from the survey, and a brief overview of them shows a couple of things:
The grey automata that the machine scored worse were always rated better by participants.
These results may seem discouraging and all over the place; however, the ending questions were written for exactly this scenario. In case our score was not validated by people, we wrote questions designed to ask the viewer:
What color scheme do you prefer?
What sort of shape breakup do you like seeing on a page?
The results of this were that people preferred similar color schemes, and that they did not like perfectly symmetrical shapes or the golden ratio.
Rather, participants preferred this breakup of shapes, which I dub the “complex 1/3rd rule of composition”.
Export does 10 mutations, is this number decent?
Export changes 3 values in dead and alive, is this too much?
Export only randomizes a portion of the colors, is this too little?
Should iterations be mutated at all?
Should the seeds be mutated at all? (not implemented)
With new implementations come new buttons in our GUI.
Our Score button has been removed; scoring is now automated with export, so an image never leaves the program without being scored.
This leaves room for additional buttons:
View Cells: prints the name of each cellular automaton and its given score in the Eclipse console.
Delete Bad: removes all automata with scores of zero from the database.
Delete All: completely clears the Cells table and all values in it.
Below, we see how all these buttons can work together. We can see the automata that have been automatically exported to the database. Afterwards, we can purge the worst scores and view them again. Finally, we can delete everything altogether.
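Assuming a Cells table with name and score columns (my guess at the schema, for illustration), the three buttons boil down to simple SQLite statements along these lines:

```sql
-- View Cells: list each automaton's name and score
SELECT name, score FROM Cells;

-- Delete Bad: remove automata that scored zero
DELETE FROM Cells WHERE score = 0;

-- Delete All: clear the Cells table entirely
DELETE FROM Cells;
```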
Now that things are in our database and we understand how to manipulate them, I can move on to the next step: actually reading things from our database back in and mutating the rulesets.
This is going to be a complex process, so before I start, I will begin to map out the steps I will have to go through to make this work.
Read in the file
Only read in the good scores; is there an SQLite query for this?
Parse the rules into a new Cell() object
A new parsing object will need to be created that will take in the fields of the chosen automata, and put them back into the dictionary formats that the program can understand.
Cell() will need to be modified to take in these dictionaries to create a new Cell() object.
Mutate the new Cell() object
Call Cell.change() for our new cell. This pre-existing function will mutate it.
Create the Cell() object as an image
This is potentially the most complex step, as a lot of things went into making cellular automata to appear on the screen.
MainController needs to set its global seeds and cells values to be the newly created cells.
drawColorAutomata() should be called.
Everything that happens in fetchButton() needs to happen, except where it gets iterations from the GUI; the iterations need to come from our automaton instead.
Score said image
Scoring should not iterate through all the images, but only score the one on the screen.
Put back in database with new name
This new score, and the new rules, should be put in the database with a new unique automata name.
This link looks helpful: http://www.sqlitetutorial.net/sqlite-java/select/
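On the open question above about reading in only the good scores: SQLite can do this filtering in the query itself. Assuming the same hypothetical Cells table with a score column, something like:

```sql
-- Read back only automata above some score threshold, best first;
-- the 0.5 cutoff is a placeholder, not the program's actual value.
SELECT * FROM Cells
WHERE score > 0.5
ORDER BY score DESC;
```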
Based on my two methods for judging an image, I created a survey. It has two parts:
The basis of the color theory has already been explained. In the survey, I address it by creating two color schemes and asking the participant to choose: one scheme rated poorly by the program, the other rated well. Color schemes are switched up randomly, so that a participant who only likes right-sided images doesn't always choose a “good” or “bad” score.
An example of the question format:
Which color scheme do you like better, 7 or 8?
There are 5 color scheme questions, so there can never be a 50/50 end opinion. Similarly, these questions do not include the actual cellular automata, to better target the color aspect of our program's aesthetic metric.
Similar to color, participants will be asked to choose between two images. However, this time they are cellular automata that have been desaturated and are now absent of color. This is my attempt to exclusively test the shape aspect of my program. Each cellular automaton has received a high (> 0.5) or low (< 0.1) score and been matched with an opposing automaton.
An example of a question like this would be:
Which image do you prefer, 1 or 2?
I hope this will give me a better understanding of people's preferences toward certain shapes. If the “bad” images are rated better, I will know that I have to rethink how I judge small vs. large shapes.
My final metric is a combination of having different color schemes and larger shapes.
An anecdote that I was proud of during this project: I let the program create and judge 50 images, and while I waited for it to process each image's score, I went through each image in the folder myself.
I personally hated each image, and was very disappointed in the batch. Except for one that stood out as having a bunch of clear little squares. I thought it was interesting.
Turns out, every image in the batch got a score below 0.5 except that one, which ended up having a score of ~0.7!
Finally, the survey is out. After a week of jumping through hoops it has officially been submitted to the Hendrix online newsletter, and posters are set to go out starting Monday.
There was a delay of 5 days after we asked the HSRB department what we needed to do if we had made changes to our survey, and they responded with:
You will have to submit a HSRB Research Project modification form (available online at https://www.hendrix.edu/hsrb/) that outlines the changes you have made to the survey. Please also send the updated survey.
Our response, of course, was to thoroughly follow the steps above and fill out the paperwork ASAP, while also getting the required signatures from our adviser and ourselves. This was done on the 31st and turned in physically on the 1st of January, our expected start date for releasing the survey to the public.
Our response to submitting the paperwork was as follows:
Your modifications are approved and your approval memo is attached. Please note, though, that the federal regulations on HSRB review have recently changed. Here’s how the changes affect your project:
Your approval no longer expires.
You no longer need HSRB approval to make modifications to your research, unless the modifications increase the level of risk.
This was of course incredibly frustrating, considering that none of our changes increased the level of risk; it also meant that our survey could have been sent out on the expected date.
This has been a week not particularly well-spent, considering the amount of time wasted filling out a form.
Database needs for the next two weeks:
Build a function that will create a new database
Add a delete button that will clear items currently in the database for reuse
This week has been a whirlwind; so much is getting done in bits and pieces that the end of the week was spent trying to finish things up and put it all together. The first thing that had to be done was perfecting my scoring metric for the images, with the inclusion of shapes.
Perfecting my Chi-Score
Meeting with my adviser for this project led me to reconsider how the program scores color. Now, instead of using a confusing metric based on RGB values, the program examines the hue value of a color. It uses a wheel of hue values and examines how far apart colors are on this wheel. When three colors are the maximum distance from each other, 120 degrees apart, the program gives them a perfect score.
The program automatically sorts the image into three approximate colors. If three colors are really close together, they will get a worse score. Below are examples of how the program has scored some color schemes.
As you can see, none of the color schemes seem particularly “better” or “worse” than one another, although you can see the similarities between the colors in the color schemes that ranked the lowest.
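A minimal sketch of this hue-wheel scoring, under the assumption that the score is the sum of the three pairwise circular distances, normalized so that hues spaced 120 degrees apart score exactly 1.0 (my program's actual normalization may differ):

```java
// Sketch of the hue-wheel score: hues in degrees, summed pairwise
// circular distances, normalized so 120-degree spacing scores 1.0.
public class HueScoreSketch {
    // Shorter arc between two hues on the 360-degree wheel.
    static double circularDistance(double a, double b) {
        double d = Math.abs(a - b) % 360.0;
        return d > 180.0 ? 360.0 - d : d;
    }

    // Three evenly spaced hues (0, 120, 240) give pairwise distances
    // of 120 each, totaling 360, for a perfect score of 1.0.
    public static double hueScore(double h1, double h2, double h3) {
        double total = circularDistance(h1, h2)
                     + circularDistance(h2, h3)
                     + circularDistance(h3, h1);
        return total / 360.0;
    }
}
```

Three identical hues score 0.0, and clustered hues score low, matching the "really close together gets a worse score" behavior.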
It is perfectly plausible that the opinion on different colors being better is wrong, however, I believe I will gain more insight into this when the student survey goes out.
Having different sizes is important in an image; however, I needed an exact score for this. To determine the ideal large/medium/small shape ratio, I turned toward the golden ratio.
Without addressing the asymmetry of the image, I can tell that having 2 large blobs, 4 medium blobs, and 3 small blobs should be considered ideal. This image does NOT contain any tiny blobs, which is something I have always disliked in images.
However, this seems to give too much precedence to images that form almost no cells whatsoever. Below we see an example of a better-scoring image that formed.
This got a score of 0.38! One of the better scores out there, despite being mostly empty!
There are two things I am going to do to address this: I am going to lift the restriction on tiny blobs, and I am going to say that having 0-1 big blobs is actually bad; it is only good when there are 2 or more big blobs!
Since the image can have up to 10 big blobs in it by how we defined blobs as “big”, I will say that 2-5 big blobs is the ideal range.
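The adjusted rule can be sketched like this; the 2-5 ideal range and the cap of 10 come from above, but the exact penalty slopes outside that range are placeholders of mine:

```java
// Sketch of the big-blob rule: 2-5 big blobs is ideal, 0-1 is penalized,
// and counts above 5 taper off toward the cap of 10. The penalty slopes
// (0.25 and 0.2) are illustrative placeholders.
public class BlobScoreSketch {
    public static double bigBlobScore(int bigBlobs) {
        if (bigBlobs >= 2 && bigBlobs <= 5) return 1.0;   // ideal range
        if (bigBlobs < 2) return bigBlobs * 0.25;         // 0 or 1 scores poorly
        return Math.max(0.0, 1.0 - (bigBlobs - 5) * 0.2); // 6..10 taper off
    }
}
```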
From here, I came up with a score that I found much better, and it chose images that had more shapes, more going on, and were not as empty as the previous metric.
These are some of the images that scored better for size out of 35 randomly generated images. The top left had a score of 0.75, which was significantly higher than the rest that we had seen.
As we can see though, these don’t necessarily have good color schemes. It is my hope that combining these two metrics will make a new metric that will overall be more appealing.
Our database has been started, and we are still calibrating it to be easier to read/write things to before we can move on to really implementing its usage!
Next week our goal is entirely to get a usable survey out.
Shape identification is in tip-top shape. The program can correctly identify squares, triangles, lines, and circles, including when they have been twisted in various directions.
Realistically, the tested images will not be in this sort of environment. Rather, there might not be consistent shapes that can be identified. Here are two examples that demonstrate what it does in more realistic scenarios: one from an image generated by cellular automata, coded by me, and one from L-systems, coded by Taylor Baer.
L-System – Taylor Baer
Cellular Automata – Chantal Danyluk
When given a small piece of one of Taylor Baer's L-systems, the program responds well, identifying triangles.
However, when given a jumbled mess of cellular automata, the program tends to judge things as mostly square, and doesn’t know what to make of the rest.
Both these behaviors are consistent with what we observed above. What it means moving forward is that, as programmers, we cannot blindly use the numbers presented in the GUI, but rather have to pay attention to what is going on behind the curtain. The program judges a blob based on what it is MOST like, and contains a list of percentages of similarity between the different shapes.
When developing an aesthetic measure, it is this list that should be used with care.
For a rough start to my aesthetic metric, I will say that my example Matisse painting is the height of greatness, and anything that diverges from this should be considered less than perfect.
After developing a chi-squared test for this, we come out with an aesthetic metric that rates the Matisse as 1.0, and anything that diverges from it, a lower score.
Examples of how this works are shown below, where we can see that forming smaller numbers of shapes is preferred (so long as the shapes are at least 100 pixels in size). The Matisse scores perfectly.
This score is only rough right now, and represents approximately what we can do. However, it does not fully represent my hypothesis. In the future, I would like to expand it to encourage specifically the forming of larger shapes, and colors that are as different as possible. This more complex fitness test will be implemented when I have the right framework in place for it.
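The general shape of such a chi-squared score can be sketched as follows; the 1 / (1 + chi2) squashing into a 0-1 range is an assumption of mine, chosen so that a perfect match with the Matisse reference counts gives exactly 1.0:

```java
// Sketch of a chi-squared aesthetic score against the Matisse shape
// counts. The 1 / (1 + chi2) squashing into (0, 1] is an assumption.
public class ChiScoreSketch {
    // observed: shape counts from the image being judged
    // expected: the Matisse reference counts (must be nonzero)
    public static double chiSquareScore(double[] observed, double[] expected) {
        double chi2 = 0.0;
        for (int i = 0; i < expected.length; i++) {
            double diff = observed[i] - expected[i];
            chi2 += diff * diff / expected[i];
        }
        // A perfect match gives chi2 = 0 and a score of 1.0;
        // divergence from the reference lowers the score.
        return 1.0 / (1.0 + chi2);
    }
}
```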
So far, testing images has been done separately from creating them. However, with a preliminary fitness score, it is now time to turn our attentions to how we can fit these together.
Here is my proposed plan for putting the logic from the GUI in the old program.
The first step is to get the program to auto-read files at the location they are exported to.
For each picture exported, when the score button is pressed, the program now rereads the images. The subtle changes on the screen do not indicate much when this happens. Export causes the program to display the last file exported. The score button causes the program to read these images back in, ending by displaying the same last image in the folder. At first glance, someone might not be able to tell the program did anything. However, the colors of the image change slightly as the program adjusts them to only three colors, so some colors disappear on screen. In addition, print lines get called that make it obvious the images have been read.
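The auto-read step can be sketched as a simple directory listing; the export location is left as a parameter, and the .png extension is an assumption for illustration:

```java
import java.io.File;

// Sketch of auto-reading the export location. The directory path is a
// parameter and the .png extension is an assumption for illustration.
public class AutoReadSketch {
    public static File[] exportedImages(String exportDir) {
        File[] found = new File(exportDir).listFiles(
                (dir, name) -> name.toLowerCase().endsWith(".png"));
        // listFiles returns null for a missing directory
        return found == null ? new File[0] : found;
    }
}
```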
Give files a score
A preliminary score was given out based on the shapes in my Matisse painting. However, I have now also implemented a brief score that judges color. The program sorts the image into three colors, and the color score examines these three colors for differences. If the RGB values are significantly different, the colors get a better score. If they are the same, not so much.
Below is how this actually works.
“Good” color scheme
“Bad” color schemes
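A sketch of what such an RGB-difference score can look like, normalizing the average pairwise Euclidean distance between the three sorted colors by the black-to-white distance (the program's actual formula may differ):

```java
// Sketch of the early RGB-difference score: average pairwise Euclidean
// distance between the three colors, normalized by the black-to-white
// distance. The program's actual formula may differ.
public class RgbScoreSketch {
    static double distance(int[] a, int[] b) {
        double sum = 0.0;
        for (int i = 0; i < 3; i++) {
            double d = a[i] - b[i];
            sum += d * d;
        }
        return Math.sqrt(sum);
    }

    // 0.0 for three identical colors; approaches 1.0 as the
    // three colors spread apart in RGB space.
    public static double rgbScore(int[] c1, int[] c2, int[] c3) {
        double max = Math.sqrt(3.0 * 255.0 * 255.0);
        double avg = (distance(c1, c2) + distance(c2, c3) + distance(c1, c3)) / 3.0;
        return avg / max;
    }
}
```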
Judging shapes is a WIP.
Store score in database
This still needs to happen. My partner and I are looking into creating a small database and inputting files.
Project Plan + Timeline
This semester, our project plan is laid out further ahead than a week-by-week or month-by-month basis. From previous scheduling, we have a set-in-stone timeline to get things done by.
January : Rough fitness test done and implemented, mutation started
February – February 14th : Survey to students goes out
March 26 – April 6 2018 : Display date in the library; 16 posters of our images, 21×21 inches, due by then
Below is what we want to implement as soon as possible, or rather, before the start of January.