Blob Analysis Continued 12/09/2017

After talking to my mentor, I have decided to rethink some things. Before I even start, I need to sit down and re-evaluate my goals, how my code is built, and how the chi-squared test will fit into it.

Right now I have an excellent blob-to-matrix function that splits each blob into a 6×6 grid.

The next step is to analyze these grids using a simplified chi-squared metric.

My plan for tackling this is as follows:

[Image: the plan]

Program – Explained

For every blob in the image (except for blobs considered tiny, which is not yet implemented!), the program will go through and give them a “score” using a simplified version of the chi-squared statistic.

The program comes up with this score by comparing the blob’s grid to the expected grid for a shape and counting every pixel that is off in the 6×6 grid. The score is then given as 1/((pixels off) + 1), where 1/1, or 1, would be considered a “perfect” score. For every pixel off, the score goes down.
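
As a rough sketch of the scoring (a minimal version with my own method and variable names, not necessarily the program’s exact code):

```java
// Compare a blob's 6x6 grid against the expected grid for a shape.
// true = the cell is filled. Every mismatched cell is a "pixel off".
public static double score(boolean[][] blobGrid, boolean[][] expectedGrid) {
    int pixelsOff = 0;
    for (int row = 0; row < 6; row++) {
        for (int col = 0; col < 6; col++) {
            if (blobGrid[row][col] != expectedGrid[row][col]) {
                pixelsOff++;
            }
        }
    }
    return 1.0 / (pixelsOff + 1); // 0 pixels off -> 1.0, a "perfect" score
}
```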

The program will test the blobs against the shapes presented to it, currently stored as an enum with the values LINE, SQUARE, CIRCLE, and TRIANGLE.

[Image: a perfect square]

For example, when given an obviously perfect square, the program rates it as follows:

  • square: 1.0
  • line: 0.0322
  • circle: 0.2
  • triangle: 0.0588

The closest shape to this blob is clearly the square, since it is a “perfect” fit.

The program is supposed to take the highest of these values and count the blob as a “square” accordingly.
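
A sketch of that selection step, reusing the score method above (the Shape enum matches the post; the map of expected grids is my stand-in for however the program actually stores them):

```java
import java.util.Map;

enum Shape { LINE, SQUARE, CIRCLE, TRIANGLE }

// Score the blob against every shape's expected grid and keep the best.
public static Shape classify(boolean[][] blobGrid, Map<Shape, boolean[][]> expectedGrids) {
    Shape best = null;
    double bestScore = -1.0;
    for (Shape shape : Shape.values()) {
        double s = score(blobGrid, expectedGrids.get(shape));
        if (s > bestScore) {
            bestScore = s;
            best = shape;
        }
    }
    return best;
}
```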

However, the problem I am running into is as follows:

[Image: the problem cases]
The program is marking all of these as squares, potentially because they take up an odd amount of room or are otherwise imperfect. Similarly, when the code for counting blobs as squares is commented out, the program decides they are all circles instead. This is clearly not the case.

Obviously I need to address this problem. In addition, to avoid slowing the whole thing down, I need to make sure that the program does not run any of this analysis on “tiny” blobs.

Bug Fixes and Perfecting the Program

Thanks to my adviser, we now have a program that can fairly accurately identify squares, circles, and lines. However, it still struggles with triangles.

[Image: more functionality]
Here, it correctly identifies three lines and three “circles”. However, it counts the triangles as squares, throwing off both counters.

As I have discovered, this is because I put the “expected” grid of the triangle in sideways. After an easy fix, it now counts four triangles: the arrow and three actual triangles.

[Image: working, except on lines]

After some more testing I have figured out that it is labeling all the shapes mildly incorrectly. Below is an illustration of the program’s true functionality.

[Image: what the program is actually doing]

I theorize that to make this work better, it needs the old definition of lines back in place (where empty spaces are counted), and a more stringent definition of squares (where the corners have to be filled in).

After playing with the line analysis, it now works much better; however, incorrectly identifying far too many objects as squares is still a problem.

Hopefully, however, “twisting” shapes will allow them to fit into other shape categories without being considered squares.

Rotating Shapes

Rotating shapes is proving to be a struggle, as the rotation itself appears imperfect at best.

Testing the implementation on a triangle yields the following results:

[Image: incorrect rotation outputs]

Overdue

Everyone is going home for winter break, and our college is closing the dorms. Since we have not met our goals before the end of the semester, we will be working on a whatever-we-can basis over break.

Ultimately my goal is still to fix this analysis, and get a crude aesthetic metric done. I feel like I am incredibly close, and will continue this into next week.

The blog will update as usual, but a formal winter-break notice will be added once goals have been met and work ceases.

Line Analysis 12/02/2017

For the rest of the semester I will be working on blob analysis in a final push to get a preliminary aesthetic measure done.

This will start with a different way of analyzing the blobs, in which we will try to see whether a blob is a line that isn’t just horizontal or vertical. I will be working to identify curved and diagonal lines.

Identification and Counting

[Image: lines not being detected]
Our blob checker currently identifies none of these as lines, which is a problem.

Now, the program should identify concave lines as lines, as well as straight ones. For this I wrote a function that splits the invisible bounding box around each blob into a 4×4 grid and checks whether certain conditions apply.

A rough diagram of this is shown below, explaining how it can be used to find concavity without having to check every single pixel in a line.

[Image: diagram of the grid logic]

Roughly, this interprets the line as larger “pixels.” Right now we are splitting blobs into 4×4 grids and extrapolating data about their shapes, to quickly determine what exactly makes a line a line.
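
A sketch of that splitting function, under my own naming assumptions (the same idea works for the 6×6 grids used later; it assumes a non-empty blob stored as a set of points):

```java
import java.awt.Point;
import java.util.Set;

// Divide the blob's bounding box into gridSize x gridSize cells and mark a
// cell true if any of the blob's points lands inside it.
public static boolean[][] toGrid(Set<Point> blobPoints, int gridSize) {
    int minX = Integer.MAX_VALUE, minY = Integer.MAX_VALUE;
    int maxX = Integer.MIN_VALUE, maxY = Integer.MIN_VALUE;
    for (Point p : blobPoints) {
        minX = Math.min(minX, p.x); minY = Math.min(minY, p.y);
        maxX = Math.max(maxX, p.x); maxY = Math.max(maxY, p.y);
    }
    double cellW = (maxX - minX + 1) / (double) gridSize;
    double cellH = (maxY - minY + 1) / (double) gridSize;
    boolean[][] grid = new boolean[gridSize][gridSize];
    for (Point p : blobPoints) {
        int col = Math.min((int) ((p.x - minX) / cellW), gridSize - 1);
        int row = Math.min((int) ((p.y - minY) / cellH), gridSize - 1);
        grid[row][col] = true;
    }
    return grid;
}
```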

So far, my program interprets the following lines like so:

[Image: how the program interprets the lines]

As is seen, these appear to be fairly accurate depictions of the lines. However, this is not completely satisfactory for two reasons:

  • In the second example, the program fails to find the empty spaces in the squiggle.
  • In the third example, the program fails to find the empty spaces in the corners of the circle.

This may not seem like an issue; however, here are two examples of similar shapes that will return the same output.

[Image: shapes that are not lines but produce the same output]

So how do we determine which ones are shapes, and which ones are lines?

First, I will update the program to use a 6×6 grid; this will be enough to find the corners of the circle as empty, and to find the empty space in the squiggle.

Observing these three different kinds of lines sorted into 6×6 grids, I see the following:

  • The concave line has 14 True values and 22 False values
  • The circle line has 16 True values and 20 False values
  • The squiggle has 13 True values and 23 False values

I hypothesize that when

18 <= False <= 30

6 <= True <= 18

(the two ranges are complementary, since the 6×6 grid always has 36 cells).

Then the blob should be considered a line. Implementing this on the first example, we can see that it correctly identifies the simple lines below.

[Image: working! Maybe]
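
A minimal sketch of that rule as code, counting the filled cells in the 6×6 grid and checking the hypothesized ranges:

```java
// Accept a blob as a line when its filled/empty cell counts fall in the
// hypothesized ranges: 6 <= True <= 18 and 18 <= False <= 30.
public static boolean looksLikeLine(boolean[][] grid) {
    int trueCount = 0;
    for (boolean[] row : grid) {
        for (boolean cell : row) {
            if (cell) trueCount++;
        }
    }
    int falseCount = 36 - trueCount; // a 6x6 grid always has 36 cells
    return trueCount >= 6 && trueCount <= 18
        && falseCount >= 18 && falseCount <= 30;
}
```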

However, some instances are still not found: the program will not identify any lines that backtrack on themselves, loop around, or generally fill up space. Below are some examples illustrating how lines can get more complex and are no longer identifiable.

To help solve this issue, I will attempt to score these blobs as percentages: each individual blob will be labeled with what percentage of a line it is.

What Percentage is a Blob a Line?

For this, we will be using what is called a chi-squared test. We already have a good idea of what acceptable values for a line are; from here we need to plug these values into a chi-squared test. Below is an outline of how I will run my first chi-squared test.

Chi-Squared Testing

1) Null hypothesis: Blob b is a line.

Alternative hypothesis: Blob b is not a line.

2) Choose a level of significance

We know that our values for False should lie between 18 and 30, and our values for True between 6 and 18. We can turn this into our level of significance.

3) Find the critical value and the degrees of freedom

Our critical value is to be determined.

Our degrees of freedom will be 35, since there are 36 possible outcomes in a 6×6 box if you do not consider the placement of the true/false values.

4) Find the test statistic

The test statistic will be:

X = ((observed value) – (expected value))^2 / (expected value)

If we have multiple observed and expected values for something, we find X for each pair and add the results together. (A code sketch of this statistic follows step 5 below.)

5) Plot the value on a chi-squared graph, and figure out whether blob b is a line, or what percentage it is away from the expected value.
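
Here is the promised sketch of the test statistic. Note that it uses double rather than int so the division isn’t truncated (the two arrays pair up observed and expected values):

```java
// Chi-squared statistic: sum (O - E)^2 / E over every observed/expected pair.
public static double chiSquared(double[] observed, double[] expected) {
    double x = 0.0;
    for (int i = 0; i < observed.length; i++) {
        double diff = observed[i] - expected[i];
        x += (diff * diff) / expected[i];
    }
    return x;
}
```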

On Monday, I will go over these precise steps with my mentor, look into how to code them for the general case, and discuss the right identification criteria for what is a line, as well as for what certain shapes are.

I should be looking into triangle-, square-, and circle-shaped objects in the future.

The “Goodness” of Blobbing 11/18/2017

What We Need

Today, we work on integrating Ferrer’s code into our own program. This includes understanding its input and output, and using or adjusting them to fit our needs.

Ultimately, what we need is for a preliminary score of “goodness” based on Ferrer’s code to be automatically exported in the same text file containing the rules for recreation of an image. This should be done in our own program, and shouldn’t require much extra work for the user.

We will start by understanding Ferrer’s code and what it does.

Code Review

Firstly, we look at what Ferrer’s code does.

Essentially, it clusters all the colors in an image by using k-means to sort the image into a number of colors. This number is determined by the user. Then, it goes through and finds the different “blobs” in the image, which are clusters of the same simplified color.

[Image: blob detection]
This image was sorted into two colors; the current blob found is shown in red, although the program found 317 total blobs.

A few things I noticed: the user is required to input how many colors the image is sorted into; the more colors allowed, the more blobs found; and the program will consider even one-pixel areas to be blobs.

This needs to change. Firstly, we need the k-means color count to be chosen inside the code, not by the user; secondly, we need the program to sort our blobs into small, medium, and large for us.

Blobs

The really scary part is messing around in Ferrer’s code. It is now attached to our code, and as such, I have begun to butcher it according to what we need.

[Image: butchering it]
This is a start to editing Ferrer’s program.

Next, we have to read the code and find out whether or not blobs hold the data needed to determine how large they are, if we can access it, and if we can edit it in any way for our purposes. Luckily, Ferrer’s code is incredibly well organized.

With some minor adjustments, I was able to make functions return the values I wanted, and write some code in the controller to do two things.

  • Have the program tell us the number of large, medium, and small blobs
  • Have the program fill in the large, medium, and small blobs with colors other than red.

Visually, it was necessary to do both of these things, since I next need to answer two questions:

  • What defines blobs as large, medium and small?
  • What are “good” ranges for a picture having these?

For this, I will go back to the Matisse “Seated Ruffian” and theorize visually how many large, medium, and small shapes I think it has. I will use this to determine how to judge the size of blobs, and from there, the “goodness” of having that many blobs.

[Image: the Matisse, back at it again]

From here I can mess around with the numbers in Java to make the large, medium, and small shapes fit this new standard.

Currently, big is set to 25,000 pixels or greater; medium is 1,000 pixels or greater; small is less than 1,000 pixels; and tiny is less than 100 pixels.
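
As a sketch, those thresholds translate to something like the following (the enum and method names are mine):

```java
enum BlobSize { TINY, SMALL, MEDIUM, LARGE }

// Bucket a blob by its pixel count using the current thresholds.
public static BlobSize sizeOf(int pixelCount) {
    if (pixelCount >= 25_000) return BlobSize.LARGE;
    if (pixelCount >= 1_000)  return BlobSize.MEDIUM;
    if (pixelCount >= 100)    return BlobSize.SMALL;
    return BlobSize.TINY;
}
```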

Our Matisse has:

  • 2 large blobs
  • 11 medium blobs
  • 21 small blobs
  • 495 tiny blobs

We could say that an image is “good” if it has 2 large shapes, 11 medium shapes, and 21 small shapes, while maintaining fewer than 500 tiny shapes. Anything that strays from these values should score worse.

From here we will come up with a “goodness” value for other images, based on the scale of the Matisse.
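
One possible way to turn “straying from the Matisse counts is worse” into a number is sketched below. This is purely my own illustration; the actual formula still needs to be discussed on Monday, and the weights here are made up:

```java
// Penalize deviation from the Matisse's counts (2 large, 11 medium,
// 21 small, < 500 tiny). 1.0 means a perfect match; more deviation -> lower.
public static double goodness(int large, int medium, int small, int tiny) {
    double penalty = Math.abs(large - 2)
                   + Math.abs(medium - 11)
                   + Math.abs(small - 21)
                   + Math.max(0, tiny - 500) / 10.0; // only punish excess tiny blobs
    return 1.0 / (1.0 + penalty);
}
```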

[Image: output percentages]
Our example output of “fitness” or percentages based on these shapes. On Monday, I will have a more detailed discussion on how to better calculate these percentages.

For now, we will continue to try to identify not only shapes, but also lines.

Line

[Image: bob the line blob]

Lines aren’t just vertical and horizontal; they are also diagonal and, most importantly, they can curve. To find these, I will have to…

  • identify individual blobs in our blob list that qualify as a line by how thin they are
  • find any diagonal neighboring “lines” and string them together to make one big “line”
  • keep track of the “lines” in a list so we can easily count them and go through them.

Blobs don’t have a width or height, though; rather, they consist of a LinkedHashSet of points. This very literal representation won’t do for finding lines, however.

To find the approximate height and width of a blob, I will find two important points on the blob. The first is the point with the lowest x and y values; the second is the point with the highest x and y values.

These are roughly illustrated below, with some code I have written to find them.

[Image: the two corner points, with the code to find them]
These we will use to define the height and width of a blob, which we will use to find horizontal and vertical lines.

Now that we can find a blob’s height and width, how do we define a line? To do this, I made an example image and based my definition of a line around what I felt was right. It is important to note that a line’s definition is not a static number, but rather changes in relation to the width and height of the image.

[Image: what constitutes a line]
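
A sketch of the whole check, with a placeholder threshold (the 5% figure is my stand-in; the real definition is tuned against the example image above, and a non-empty blob is assumed):

```java
import java.awt.Point;
import java.util.Set;

// Find the blob's bounding box from its point set, then call it a line if it
// is very thin relative to the image.
public static boolean isLine(Set<Point> blob, int imageWidth, int imageHeight) {
    int minX = Integer.MAX_VALUE, minY = Integer.MAX_VALUE;
    int maxX = Integer.MIN_VALUE, maxY = Integer.MIN_VALUE;
    for (Point p : blob) {
        minX = Math.min(minX, p.x); minY = Math.min(minY, p.y);
        maxX = Math.max(maxX, p.x); maxY = Math.max(maxY, p.y);
    }
    int width = maxX - minX + 1;
    int height = maxY - minY + 1;
    boolean horizontal = height <= imageHeight * 0.05 && width > height;
    boolean vertical   = width <= imageWidth * 0.05 && height > width;
    return horizontal || vertical;
}
```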

Overview

The program has been edited to:

  • Sort blobs into set sizes
  • From these sizes, come up with a crude “goodness” measure
    • This measure will need to be discussed Monday
  • Find vertical and horizontal lines
    • Small shapes are also being found as lines; these either need to be counted as part of a larger line, or removed from the list. Effective ways to do this should be discussed.

After next week we will look at fixing these bugs with the ultimate goal of eventually putting together a good aesthetic measure.

Blob Coloring and HSRB Proposal 11/11/2017

This week, some interesting things happened. Firstly, our proposal got accepted, and we can now move on to the next step of submitting an HSRB proposal. Taylor and I have both taken an hour-long online class on how to run a good study, and we are currently working together to finish questions for next year and get the proposal submitted by Tuesday.

The most interesting thing, however, was our discussion with another computer science professor at our college, which led us in a different direction from edge detection.

Dr. Ferrer

Dr. Ferrer is a professor at our college who has some experience with object detection in his own research. Using a robot that can take pictures, he has headed a project that analyzes those images and allows the robot to respond. As part of his research, he uses something called…

Blob Coloring and K-Means Clustering

[Image: whiteboard notes]
Blob coloring – given that a picture is a set of pixels, create a list containing references.

Instead of detecting edges, blob detection finds regions of color and defines them by certain aspects of themselves: their size, how many pixels they have, their aspect ratio, etc. These regions of color, or blobs, are simply the pixel regions themselves, carrying this extra information. Part of finding them, however, is knowing how many blobs you expect to be there to begin with.


Finding out how many blobs there are depends on how many distinct colors you expect to find in the image. Before you use blob detection, you first need to determine the number of distinct colors in the image, and then cluster these colors together. Both Dr. Ferrer and our adviser suggested using k-means.

Instead of rewriting code for this by hand, Dr. Ferrer offered to let us use some of his code and re-implement it with respect to our program’s functionality.

[Image: clustering output]
Dr. Ferrer’s wonderful clustering program.
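
I won’t reproduce Dr. Ferrer’s code here, but the clustering step it performs looks roughly like this simplified sketch of k-means over RGB triples (entirely my own version, for illustration):

```java
import java.util.Random;

// Cluster pixels (each an {r, g, b} triple) into k colors. Returns, for each
// pixel, the index of the cluster it was assigned to.
public static int[] kMeans(int[][] pixels, int k, int iterations) {
    Random rand = new Random();
    int[][] centers = new int[k][];
    for (int c = 0; c < k; c++) {
        centers[c] = pixels[rand.nextInt(pixels.length)].clone(); // random seeds
    }
    int[] assignment = new int[pixels.length];
    for (int iter = 0; iter < iterations; iter++) {
        // Step 1: assign each pixel to its nearest center (squared RGB distance).
        for (int p = 0; p < pixels.length; p++) {
            long bestDist = Long.MAX_VALUE;
            for (int c = 0; c < k; c++) {
                long d = 0;
                for (int ch = 0; ch < 3; ch++) {
                    long diff = pixels[p][ch] - centers[c][ch];
                    d += diff * diff;
                }
                if (d < bestDist) { bestDist = d; assignment[p] = c; }
            }
        }
        // Step 2: move each center to the mean color of its assigned pixels.
        long[][] sums = new long[k][3];
        int[] counts = new int[k];
        for (int p = 0; p < pixels.length; p++) {
            counts[assignment[p]]++;
            for (int ch = 0; ch < 3; ch++) sums[assignment[p]][ch] += pixels[p][ch];
        }
        for (int c = 0; c < k; c++) {
            if (counts[c] == 0) continue; // leave empty clusters where they are
            for (int ch = 0; ch < 3; ch++) centers[c][ch] = (int) (sums[c][ch] / counts[c]);
        }
    }
    return assignment;
}
```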

Using this blob detection, we should be able to find the fitness of certain aspects of our images. From there, we can work on getting our aesthetic measure done by the end of the semester.

Extra

I managed to hook up Taylor’s code to my code for exporting. Although she doesn’t have randomization yet, it is set up so that she will be able to add it and export automatically!

[Image: L-system output now saves]

It was brought up that my program should be able to somewhat intelligently pick colors. Unfortunately, the images below are the current progress toward that.

[Image: a poor color attempt]
A color attempt where one of the RGB values randomly increases/decreases by a set amount; if it gets too low or too high, the color is re-randomized.
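
The walk in this first attempt boils down to something like the sketch below (names are mine; the step size is whatever “set amount” the program uses):

```java
import java.util.Random;

static final Random RAND = new Random();

// Nudge one random RGB channel up or down by `step`; if the channel would
// leave the 0..255 range, re-randomize the whole color instead.
public static int[] nextColor(int[] rgb, int step) {
    int channel = RAND.nextInt(3);
    int nudged = rgb[channel] + (RAND.nextBoolean() ? step : -step);
    if (nudged < 0 || nudged > 255) {
        return new int[] { RAND.nextInt(256), RAND.nextInt(256), RAND.nextInt(256) };
    }
    int[] next = rgb.clone();
    next[channel] = nudged;
    return next;
}
```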

 

[Image: rainbow gradient attempt]
A color attempt at a rainbow gradient, where each RGB value gradually shifts. Based on this.

HSRB

Work on HSRB progresses. Taylor is doing the dirty work of filling out forms, and I have been working on fliers. Below are a couple quick mockups.

[Images: four flier mockups]

I also edited the questions associated with the research project and wrote a debriefing for afterwards.

Goal

Our current goal is to have our aesthetic measure done by the end of the semester, and begin our evolution next semester.

Edge Detection and Image Replication 11/04/2017

This week I obtained approval for the location and turned in the required paperwork. By next week I expect a response so we can move on, write the preliminary survey, and then seek HSRB approval. Unfortunately, because of slow response times and people being out of their offices, we cannot move on to the next step and start now 😦

Edge Detection

[Image: the start of edge detection]

I have successfully started on edge detection, working first to import the image to the canvas for viewing, then continuing to analyze the file for how many colors it contains. Below, the program detects these changes in color, draws a line indicating each change, and counts them.

[Image: edges working, to a degree]
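
A minimal sketch of the color-change detection (my simplified version; the actual program also draws the lines on the canvas):

```java
import java.awt.image.BufferedImage;

// Mark a pixel as an edge whenever its color differs from the pixel to its
// right or the pixel below it. Counting the true cells gives the edge count.
public static boolean[][] findEdges(BufferedImage img) {
    int w = img.getWidth(), h = img.getHeight();
    boolean[][] edges = new boolean[h][w];
    for (int y = 0; y < h - 1; y++) {
        for (int x = 0; x < w - 1; x++) {
            int rgb = img.getRGB(x, y);
            if (rgb != img.getRGB(x + 1, y) || rgb != img.getRGB(x, y + 1)) {
                edges[y][x] = true;
            }
        }
    }
    return edges;
}
```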

From here, I will discuss with my adviser tomorrow how I can use this information to detect larger shapes and lines. Furthermore, I have to discuss a bug where the program doesn’t work with images made outside this program.

File Grammar for Replication

Before, I was exporting images with no way to recreate them or understand what rulesets they were following. They were one-of-a-kind, unrepeatable images, even by our own program. This won’t work if our program is intended to analyze the rulesets in any way, shape, or form.

[Image: automata now save with a rules file]

Now, the images are exported with a text file of the same name, containing the pixels that were originally on, the dead/alive ruleset used to create the image, the colors used in their respective order, and the number of iterations it went through from the beginning.
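
Conceptually the file carries those four pieces of information. A hypothetical layout (not the exporter’s literal syntax) might look like:

```
# hypothetical layout, for illustration only
alive-pixels: (12,40) (13,40) (14,40)
ruleset: alive=2,3 dead=3
colors: #000000 #FF8800 #FFFFFF
iterations: 15
```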

Small Changes

Some small changes were also made, including adjusting a function so that line weights could be included.

Goals

My short-term goals for this week have been met. Tomorrow I will discuss the long-term goal to accomplish by the end of the semester.

Week 1:

  • Increase program functionality
  • Start edge detection program
  • Get approval for location
  • Turn in paperwork

The goals for next week are below, although I do not know if we can be expected to obtain HSRB approval by next week.

Week 2:

  • Adjust functionality as needed
  • Finish or nearly complete edge detection
  • Write up preliminary survey for next semester
  • Obtain HSRB approval

Goals and Increased Functionality 10/24/2017

Meeting back up with our adviser last Monday gave us some insight into what needs to happen in the coming weeks. In the short term, our goal is to get our programs to a stage where implementing EP can happen. In the long term, we want to implement EP.

Furthermore, this week was all about quick meetings across campus to arrange accommodations for next semester, including arranging a place to display our work, approval to quiz the student body, and credit from the school for our research. These meetings will continue into next week, where we hope to get full approval for our chosen display location.

My personal goals for the coming weeks are listed below.

Week 1:

  • Increase program functionality
  • Start edge detection program
  • Get approval for location
  • Turn in paperwork

Week 2:

  • Adjust functionality as needed
  • Finish or nearly complete edge detection
  • Write up preliminary survey for next semester
  • Obtain HSRB approval

Below, I will go through the goals written up at the beginning of this week and discuss my implementation of them.

Cellular Automata Functionality

The first thing I spent time doing was debugging. Throughout the process of building up the program I let some small bugs slide. This included things like small jitters in the beginning of cellular automata, inflexible numbers hard-coded in, and errors in wrapping the cellular automata. I also extended the canvas to be bigger, and set up the program GUI to have more controls.

[Image: the prettier GUI]

These controls extend into the code to provide new functionality. “Randomize grid values” will randomize the values on the grid. This is intended to let cells randomly come alive at the beginning, instead of having the user plant a seed.

[Image: GIF of cells being randomized]

Cells being randomized repeatedly, with a 1/100 chance of coming alive.

[Image: random cells as cellular automata]

Random cells after they have become cellular automata.
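
The randomize step itself boils down to something like this sketch (assuming the grid is a boolean array, with the 1/100 chance shown above):

```java
import java.util.Random;

// Give every cell an independent 1-in-100 chance of coming alive.
public static void randomizeGrid(boolean[][] grid) {
    Random rand = new Random();
    for (int row = 0; row < grid.length; row++) {
        for (int col = 0; col < grid[row].length; col++) {
            grid[row][col] = rand.nextInt(100) == 0;
        }
    }
}
```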

“Iterations” means that the cellular automaton will automatically advance the number of steps input. If the user inputs five, they will only see the 5th iteration of the current cellular automaton when they press fetch.

[Image: random cells after five iterations]

Randomly placed cells with five iterations.

Exporting now has new functionality: if the user wants to export a large number of cellular automata, they can now export randomly generated ones to a folder created on their PC. This uses the aforementioned iterations and random cell placement.

[Image: randomly generated files]

For example, this was 5 “Export Number” images, with randomized seeds and 15 “Iterations”.

Moving forward this will help us be able to auto-generate images to be analyzed by EP.

Edge Detection Design

Next week we want to start the edge detection program, so before then we will create a skeleton GUI for it and attach it to our current program.

[Image: rough draft of the edge detection GUI]

Above is the rough draft for what the GUI might look like and do.

[Image: the FXML skeleton for the GUI]

Here is the current FXML file skeleton for the GUI.

Next week I will start the work on Week 1’s goals, as well as discuss further needed functionality with my adviser, including how to record cellular rules in a way that they can be repeated.