I am now officially in a major crunch, as I attempt to bring this project up to the high standards I expect of myself.  Though I’ve hit some rough patches in the past weeks, I’ve also made some significant progress.  Check it out.

What I’m most proud of is the way that Processing, JavaScript, and now the APIs are all beginning to work together.  Users can now click on a circle to get information about the represented legislator, along with their latest tweet.  The Twitter ID is pulled from the Sunlight Labs Congress API.  As you can probably tell, it’s buggy right now: the tweets only update after clicking around a few times, and they’re currently a few clicks behind where they should be, but the basic premise is working.  Synchronization is a big goal for this week.
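For the curious, the lookup goes roughly like this – a sketch, not my exact code, and the endpoint and field names are from memory, so treat them (and the SUNLIGHT_KEY and latestTweet globals) as assumptions:

// look up a legislator's Twitter handle via Sunlight, then pull
// their latest tweet from Twitter's public timeline endpoint
function fetchLatestTweet(bioguideId) {
  $.getJSON('http://services.sunlightlabs.com/api/legislators.get.json',
    { apikey: SUNLIGHT_KEY, bioguide_id: bioguideId },
    function (data) {
      var twitterId = data.response.legislator.twitter_id;
      $.getJSON('http://api.twitter.com/1/statuses/user_timeline.json?callback=?',
        { screen_name: twitterId, count: 1 },
        function (tweets) {
          latestTweet = tweets[0].text; // global, read by the Processing sketch
        });
    });
}

That nest of asynchronous callbacks is exactly where the lag lives: the sketch often redraws before the second request comes back.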

My other major goal for this week is to set up my filtering options.   The overview, though useful, is pretty crowded, so the user will need a good set of filters to make this tool usable.

Stay tuned, because this project will be changing rapidly.


I’m still behind where I’d like to be, but you can check out my progress here.

This week I added collision detection to the legislators.  They slowly push away from each other when overlapping, starting from the scatter plot (political spectrum on the x-axis, leader-follower score on the y-axis).  Soon the y-axis will be replaced with the Media Quotient (MQ), which I will explain later.  I will also soon be tracking connections between legislators through co-sponsorship, displayed as tendrils between legislators that apply a light force to one another (creating natural clusters of co-sponsors and, thereby, political factions).  I’m still working on implementing the real-time feeds that will be available for each legislator (and on how to display them – I’m running out of pixels already!).
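The push-apart behavior is just pairwise circle separation; a minimal sketch (with illustrative names, not my actual code) looks like this:

// each frame, nudge any two overlapping circles apart along the
// line between their centers; the small factor keeps the drift slow
function separate(legislators) {
  for (var i = 0; i < legislators.length; i++) {
    for (var j = i + 1; j < legislators.length; j++) {
      var a = legislators[i], b = legislators[j];
      var dx = b.x - a.x, dy = b.y - a.y;
      var d = Math.sqrt(dx * dx + dy * dy);
      var minDist = a.radius + b.radius;
      if (d > 0 && d < minDist) {
        var push = 0.05 * (minDist - d) / d;
        a.x -= dx * push; a.y -= dy * push;
        b.x += dx * push; b.y += dy * push;
      }
    }
  }
}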

I also reworked the aesthetic, removing the alpha from the legislators.  This may change later, or may be used to help highlight the user’s “focus legislator”.

Yeah, this is a bad news / good news post.

First: Progress.

Second, bad news.  I spent most of yesterday attempting to get JavaScript and XML to play nice together.  I was hoping to streamline my back-end this way, but I hit a roadblock when trying to get the variables and arrays constructed from my XML pulls (via AJAX) into my Processing program.  Since I was using AJAX anyway, I decided to just include jQuery, as I’m already somewhat familiar with it.  In the end, though, I couldn’t get past the Processing roadblock.  I will probably keep working on this, but until I find a solution, I will continue with my PHP+SQL format.
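For reference, this is roughly the shape of what I was attempting (the file and tag names are made up).  The catch is the timing: the Processing sketch can start running before the asynchronous callback has filled the array.

var lastnames = [];
$.ajax({
  url: 'legislators.xml', // a hypothetical local XML dump
  dataType: 'xml',
  success: function (xml) {
    $(xml).find('legislator').each(function () {
      lastnames.push($(this).attr('lastname'));
    });
    // by this point the sketch may already have initialized
    // itself against an empty array
  }
});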

Finally, good news.  I’ve finally decided on a direction for my project (pending approval from my advisor).  My current progress hints at the final form.  I’ve decided to abandon historical data in favor of real-time data, in no small part due to the recent release of the Real Time Congress API from Sunlight Labs.   The visualization will now attempt to display the “political-media” zeitgeist, plotting legislators on a sort of scatter plot with the political spectrum along the x-axis and a derived “media quotient” along the y-axis.  I’ll talk more about this media quotient later.  The radius of each legislator will be determined by their “political capital”, derived from the number of bills they are sponsoring.  In this way, a viewer can see a real-time view of the political-media landscape.
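In code, the plotting rule boils down to two range mappings and a radius function – something like this sketch, where the names, ranges, and constants are all placeholders until the MQ and capital formulas are pinned down:

// mapRange mirrors Processing's built-in map()
function mapRange(v, inLo, inHi, outLo, outHi) {
  return outLo + (outHi - outLo) * (v - inLo) / (inHi - inLo);
}
function plotLegislator(leg, width, height, margin) {
  // political spectrum in [-1, 1] -> horizontal position
  leg.x = mapRange(leg.spectrum, -1, 1, margin, width - margin);
  // media quotient in [0, 1] -> vertical position (higher MQ sits higher)
  leg.y = mapRange(leg.mq, 0, 1, height - margin, margin);
  // "political capital": radius grows with the sponsored-bill count
  leg.radius = 5 + 2 * Math.sqrt(leg.sponsoredBills);
}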

The second portion of this application, underneath the visualization, will be links and feeds from the selected legislator’s various media mouthpieces – their C-SPAN, Twitter, and YouTube feeds, for instance.  I also hope to include recent news stories related to them.

Perhaps the aspect I’m most excited about, however, is adding some basic physics to the visualization.  The legislators will bump into each other, crowding out each other’s space within this political-media landscape.  Furthermore, when a legislator is selected, I hope to release “tendrils” from that legislator which will connect to other legislators based on co-sponsorship of bills (info available from the Real Time API), possibly when they are mentioned in another person’s speech, and possibly also to represent committee relationships.  Whether these tendrils will have basic physics of their own… well, I’d like them to.  Here’s hoping there’s time.
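If the tendrils do get physics, the simplest version is a weak spring per co-sponsorship link – a sketch along these lines, with the constants invented purely for illustration:

// pull two connected legislators toward a rest distance (Hooke's law)
function applyTendril(a, b) {
  var rest = 100, k = 0.001; // rest length and stiffness, both invented
  var dx = b.x - a.x, dy = b.y - a.y;
  var d = Math.sqrt(dx * dx + dy * dy);
  if (d === 0) return;
  var f = k * (d - rest) / d;
  a.x += dx * f; a.y += dy * f;
  b.x -= dx * f; b.y -= dy * f;
}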

In this way, I think my visualization will be a useful tool for journalists and political junkies to get a real-time, aggregated view of the political-media landscape.  At the same time, I hope it will serve in some ways as a criticism of the role the media now plays in political power.  More on this later.

Been extremely busy finishing up some work on my research assistantship, but soon that work will be out of the way, freeing up more time for this project.

That being said, I spent a fair amount of time last week turning some of GovTrack’s XML files into CSVs and uploading them to my database.  I then spent some time making some form fields on my page interact with my viz, though eventually I plan on keeping all of the UI elements on the Processing canvas if possible.  Check it out here.
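The form-field hookup is straightforward because Processing.js shares globals with the page’s JavaScript – roughly this pattern (the id and variable name are illustrative):

var partyFilter = 'all';
$('#party').change(function () {
  partyFilter = $(this).val(); // the sketch reads this global each frame
});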

The scatter plot puts the spectrum data from GovTrack on the x-axis and the leader-follower stats (also from GovTrack) on the y-axis.  I also turned all the female legislators into circles.  I realize there are likely some problems when viewing this in Safari, but compatibility issues are low on my priority list right now.

Sunlight Labs released their Real Time Congress API last week, and I hope to integrate it into a real-time mode that will include Twitter, YouTube, and news feeds.  I’ve also decided to spend some time figuring out how to circumvent PHP and MySQL, relying instead on a combination of JavaScript and XML (and API feeds).  I feel I’m wasting too much time converting data for the SQL & PHP format and can speed up the entire process by avoiding it.

Did a little work this evening adding animation to the project.  Again, check it out here.   Press any key to watch the data points animate between two states: a basic overview state and a scatter plot state.  All the data is still placeholder data, and the rollover text at the top is mainly just for testing purposes.  In any case, I’ve got the animation working and will now be able to move the data points between states rather elegantly.  Of course, animation lends itself to historical data, since the passage of time maps to, well, the passage of time quite well.  GovTrack will be my primary source for historical data, but I’m still looking for others.
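The tween itself is nothing fancy: each frame, every point eases a fraction of the way toward its target for the current state.  In miniature (names made up, and the target-assignment helper is hypothetical):

function ease(leg) {
  leg.x += (leg.targetX - leg.x) * 0.1; // same idea as Processing's lerp()
  leg.y += (leg.targetY - leg.y) * 0.1;
}
// the sketch's key handler flips every point's target between
// the overview layout and the scatter-plot layout
function keyPressed() {
  scatterMode = !scatterMode;
  assignTargets(scatterMode); // hypothetical helper that sets targetX/Y
}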

I spent another few hours this evening poking through data.  The source data at GovTrack is proving invaluable, and I will rely heavily on it.  I still have to figure out what to do with the XML format, though: either I’m going to switch from accessing SQL to using these XML files directly, or I will convert all the XML files I use into CSVs and import them into my database.  GovTrack DOES provide political spectrum data, which is a huge boon, though it only goes back to the 100th Congress.   It also contains a “leader-follower” score, a derived statistic based upon bill sponsorship.  More info on the methodology behind these statistics is available here.

I’ve all but abandoned the idea of using Cook PVI scores to track political spectrum data, because I would have to purchase a very expensive subscription to access them.   I spent more time poking through lobbying data, but unfortunately it’s hard to draw connections between that information and individual legislators.  I’m still considering the earmark data: it’s relatively easy to tie to individual legislators, it has some great numerical data points, and it goes back a few years.

My main issue still remains: What to focus on?  What story will I tell?  Earmarks might be the way to go…

A good breakthrough today.  I didn’t expect to tie the front-end and back-end together until later, but I actually managed to get it up and running without too much grief.  Most of the problems I had were simple syntax errors (missing semicolons and such), though turning the array I pulled from SQL via PHP into a JavaScript array gave me some difficulty, until I realized I was just over-thinking things.  Here’s the code:

// wrap the PHP array in a JavaScript array literal so the sketch can read it
echo '<script type="text/javascript">';
echo 'var firstnames = new Array("', join('","', $firstnames), '");';
echo '</script>';

That’s it!  Then, because Processing.js can use global JS variables, I can easily populate the various variables within my Legislator class with their corresponding data arrays.
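On the Processing side, that means populating the class is just a loop over the global – for instance:

// inside the sketch, once the PHP-echoed script has loaded
for (var i = 0; i < firstnames.length; i++) {
  legislators[i].firstname = firstnames[i];
}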

I have a very simple demo up at http://lcc.gatech.edu/~tgibes3/ccp_test/ccp_sqltest.php.  All it does is display the name of the moused-over legislator, but it demonstrates that my front-end and back-end setup will work.   Whew!
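The rollover itself is nothing more than a point-in-circle test against the mouse position – something like:

function isHovered(leg, mouseX, mouseY) {
  var dx = mouseX - leg.x, dy = mouseY - leg.y;
  return dx * dx + dy * dy < leg.radius * leg.radius;
}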

With this out of the way, I can spend next week working on more layout ideas, with the added peace of mind that whatever data I use can easily be placed into my design.

A note on the back-end: it was fairly simple to set up the database – I just uploaded a CSV!  I’ll be able to add and remove fields as needed, with almost no limitations on the amount of data I use.

I spent a few hours last night confirming that it is indeed fairly easy to integrate JavaScript elements into a Processing program using Processing.js.  I was able to pull info from an API using PHP, turn it into a global JS variable, and access it from Processing – no problem!

I then spent a few more hours poking through data and testing out some APIs.  One thing I realized is that it’s impractical for me to pull all my data from APIs.  The only case in which I might use one is if I attempt to add a real-time element to my viz, in which case I’ll use Sunlight’s new Real Time Congress API.

As for my other data sources, I’ve decided to pull the dumps from Transparency Data and Sunlight and then modify those CSV files to suit my needs.   I will then put them into a database, most likely accessing the data from SQL with PHP, transforming it into JS, and plugging it into my Processing code.  As for historical data, if I decide to go that route, it appears I have no choice but to use GovTrack.us, but that’s a huge amount (16 GB!) of XML data, and I will probably try to avoid it if possible.

One nice thing about this approach is that I will be able to use a “two-pronged attack” going forward.  I will spend half my time getting the back-end set up, massaging data, etc.  The other half of my time will be spent working on the front-end interface using placeholder data.  Then, sometime soon, the front-end and back-end will “meet in the middle.”   After tying them together, I can move on to testing, evaluation, and refinement.
