Survey and Final Metric
Based on my two methods for judging an image, I created a survey. It has two parts:
The basis of the color theory has already been explained. In the survey, I address this by creating two color schemes and asking the participant to choose between them: one scheme rated poorly by the program, the other rated well. The left/right placement of the schemes is randomized, so that a participant who always picks the image on one side doesn't consistently choose the "good" or "bad" scheme.
An example of the question format:
Which color scheme do you like better, 7 or 8?
There are 5 color scheme questions, so a 50/50 split is impossible. These questions intentionally exclude the actual cellular automata to better target the color aspect of our program's aesthetic metric.
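The randomized placement described above could be sketched like this. This is a minimal illustration, not the actual survey tooling; the function and field names are my own:

```python
import random

def make_color_question(good_scheme, bad_scheme):
    """Randomly assign the program-rated 'good' and 'bad' color schemes
    to the left/right positions, so that a side preference in the
    participant doesn't consistently favor one rating.
    (Hypothetical helper; names are illustrative.)"""
    pair = [("good", good_scheme), ("bad", bad_scheme)]
    random.shuffle(pair)
    # Record which side holds the well-rated scheme so responses
    # can be scored later.
    return {
        "left": pair[0][1],
        "right": pair[1][1],
        "left_is_good": pair[0][0] == "good",
    }
```

Either ordering is equally likely, and the stored `left_is_good` flag lets the responses be mapped back to "good"/"bad" choices afterward.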
As with color, participants will be asked to choose between two images. This time, however, they are cellular automata that have been desaturated and are now absent of color. This is my attempt to test exclusively the shape aspect of my program. Each cellular automaton received either a high score (> 0.5) or a low score (< 0.1) and was matched with an opposing automaton.
An example of a question like this would be:
Which image do you prefer, 1 or 2?
This, I hope, will give me a better understanding of people's preferences toward certain shapes. If the "bad" images are rated better, I will know that I have to rethink how I judge small vs. large shapes.
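The preparation of the shape questions can be sketched as two steps: strip the color, then pair each high-scoring automaton with a low-scoring one. The pixel format and function names here are assumptions for illustration, not the program's actual code:

```python
def desaturate(pixels):
    """Convert rows of (r, g, b) tuples to grayscale values using the
    standard ITU-R BT.601 luminance weights, removing color while
    preserving the shapes."""
    return [[round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
            for row in pixels]

def pair_images(scored):
    """Match each high-scoring image (> 0.5) with a low-scoring one (< 0.1).
    `scored` is a list of (image, score) pairs; mid-range images are
    left out of the survey."""
    high = [img for img, s in scored if s > 0.5]
    low = [img for img, s in scored if s < 0.1]
    return list(zip(high, low))
```

Excluding the mid-range scores keeps each pair clearly "good vs. bad" according to the program, so a participant's choice directly confirms or contradicts the metric.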
My final metric combines the judgments on color schemes and on larger shapes.
An anecdote I was proud of during this project: I let the program create and judge 50 images, and while I waited for it to process each image's score, I went through each image in the folder myself.
I personally hated each image and was very disappointed in the batch, except for one that stood out as having a bunch of clear little squares. I thought it was interesting.
Turns out, every image in the batch got a score below 0.5 except that one, which ended up having a score of ~0.7!
Finally, the survey is out. After a week of jumping through hoops it has officially been submitted to the Hendrix online newsletter, and posters are set to go out starting Monday.
There was a delay of 5 days after we asked the HSRB department what we needed to do if we had made changes to our survey, and they responded with:
You will have to submit a HSRB Research Project modification form (available online at https://www.hendrix.edu/hsrb/) that outlines the changes you have made to the survey. Please also send the updated survey.
Our response, of course, was to follow the steps above thoroughly and fill out the paperwork ASAP, while also getting the required signatures from our adviser and ourselves. This was done on the 31st and turned in physically on the 1st of January, our expected start date for releasing the survey to the public.
The response to our submitted paperwork was as follows:
Your modifications are approved and your approval memo is attached. Please note, though, that the federal regulations on HSRB review have recently changed. Here’s how the changes affect your project:
- Your approval no longer expires.
- You no longer need HSRB approval to make modifications to your research, unless the modifications increase the level of risk.
This was, of course, incredibly frustrating, considering that none of our changes increased the risk; it also meant that our survey could have been sent out on the expected date.
This has been a week not particularly well spent, considering the amount of time wasted filling out a form.
Database needs for the next two weeks:
- Send function
- Read function
- Build function that will create a new database
- Delete button that will clear items currently in the database for reuse
- Implementation of my and Taylor's respective tables