Archives for category: Design

Hey y’all (currently in Atlanta, y’know?),

You might have the impression that this blog is done and gone, and that my work on the Clear Congress Project is also finished. Not so! I’ve made a number of improvements over the past two months, and I also want to explicitly outline some of the features I hope to implement soon!

Features Added

First I’ll talk about the passive interface elements I’ve added. The most important is the legend on the right-hand side, which provides some immediate explanation. I also think it’s important to include some simple initial directions for the user, since it may be hard to tell that the scatter plot can be interacted with; I will likely change the cursor CSS for the entire canvas to imply more interactivity. I also added middle lines across the chart to create quadrants, and I will likely add the option to show or hide these. Finally, I changed the background color to black. I think it makes the details window pop more and makes the graphic a bit more dramatic. I want to give the user the ability to switch between black and white, and also to provide a color-blind viewing option, since color blindness affects almost 8% of men and a smaller share of women.
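
For the curious, the quadrant lines, the cursor hint, and a black/white toggle boil down to something like this tiny Processing sketch; the key bindings and variable names are just placeholders, not the project’s actual code:

    // Quadrant guide lines plus a background toggle (illustrative only).
    boolean darkBackground = true;   // black vs. white background
    boolean showQuadrants  = true;   // middle lines on/off

    void setup() {
      size(800, 600);
      cursor(HAND);                  // hint that the canvas is interactive
    }

    void draw() {
      background(darkBackground ? 0 : 255);
      if (showQuadrants) {
        stroke(darkBackground ? 80 : 200);
        line(width/2, 0, width/2, height);    // vertical middle line
        line(0, height/2, width, height/2);   // horizontal middle line
      }
      // ...legend, legislators, and details window would be drawn here...
    }

    void keyPressed() {
      if (key == 'b') darkBackground = !darkBackground;  // stand-in toggle
      if (key == 'q') showQuadrants  = !showQuadrants;
    }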

On the interactive side of things, I implemented a few viewing options, such as a jitter/reset option, as well as the ability to show/hide labels and the network graph. I’m still having some performance issues when collision is enabled, particularly in Firefox. I also added the ability to capture an image of the current state of the graphic, and I felt it was necessary to add a time element at the top of the canvas to automatically place each captured image in a temporal context. Currently it uses the viewer’s local system time, but I will probably standardize on Eastern time eventually. I haven’t implemented any new filtering options yet, but that leads me into the next section.
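
In Processing terms, the capture idea is roughly a clock drawn at the top of the canvas plus saveFrame() on a key press. The key binding and file name below are placeholders, and it reads the local system clock as described above:

    // Timestamp at the top of the canvas, plus an on-demand snapshot.
    void setup() {
      size(800, 600);
    }

    void draw() {
      background(0);
      // ...chart drawing would happen here...
      String stamp = nf(month(), 2) + "/" + nf(day(), 2) + "/" + year()
                   + " " + nf(hour(), 2) + ":" + nf(minute(), 2);
      fill(255);
      textAlign(CENTER, TOP);
      text(stamp, width/2, 5);       // the time element, local system time
    }

    void keyPressed() {
      if (key == 's') saveFrame("ccp-capture-####.png");  // numbered image
    }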

What’s To Come

First, let’s talk filters. I plan on cleaning up the interface, turning each element into a button instead of a form checkbox. That will be my first big change. Then I plan on adding more filters. Lots more. So many that I’ll need to divide them up accordion-style. To start, I want to add some flexible sliding-bar filters for the derived attributes: the partisanship score and the leader-follower score. I also want to add sliding bars for years of experience as a legislator and for age. I’d love to add income or wealth at some point too, but that will require integrating a new API, so it’s likely a long-term goal. Finally, I’d like the ability to filter out everyone not connected to the currently revealed network.
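
Under the hood, each sliding-bar filter is really just a min/max window. Here is a minimal sketch of the idea, with made-up field names and ranges rather than the real filter code:

    // Each slider keeps a [lo, hi] window; a legislator stays visible only
    // if every attribute falls inside its window.  Ranges are illustrative.
    class RangeFilter {
      float lo, hi;
      RangeFilter(float lo, float hi) { this.lo = lo; this.hi = hi; }
      boolean pass(float v) { return v >= lo && v <= hi; }
    }

    RangeFilter partisanship   = new RangeFilter(-1, 1);    // derived score
    RangeFilter leaderFollower = new RangeFilter(0, 1);     // derived score
    RangeFilter yearsServed    = new RangeFilter(0, 50);    // experience
    RangeFilter age            = new RangeFilter(25, 100);  // age

    boolean visible(float p, float lf, float years, float a) {
      return partisanship.pass(p) && leaderFollower.pass(lf)
          && yearsServed.pass(years) && age.pass(a);
    }

    void setup() {
      println(visible(0.4, 0.7, 12, 58));   // example: prints "true"
    }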

Now, the largest feature I HAVE to implement is the ability to view changes through time. As one of the few people who check the view on a daily basis, I can say the evolution over the past few months has been astonishing: the Republicans’ legislative stonewalling has pushed the entire House further and further to the right, with a large number of Democrats now crossing the center partisanship line, some dramatically so. Being able to view these changes fluidly over time will greatly strengthen the application, while at the same time creating a complete archive of 365 images per year! Yes, I’m excited about this one. You should be too! I hope to complete it by the end of the summer, maybe sooner if I get someone to help me out!
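
Mechanically, the time view should be straightforward once each day’s positions are archived: tween every circle between its position on day N and its position on day N+1. A toy sketch with fake positions:

    // Tween one circle between two archived daily positions (fake data).
    PVector dayA = new PVector(120, 300);   // position archived on day N
    PVector dayB = new PVector(260, 220);   // position archived on day N+1
    float t = 0;                            // 0 = day N, 1 = day N+1

    void setup() {
      size(400, 400);
    }

    void draw() {
      background(0);
      t = constrain(t + 0.01, 0, 1);        // scrub forward through the day
      float x = lerp(dayA.x, dayB.x, t);
      float y = lerp(dayA.y, dayB.y, t);
      noStroke();
      fill(255);
      ellipse(x, y, 12, 12);
    }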

Finally

I plan on blogging regularly starting today, likely linking an image from Clear Congress Project to something I’ve read or some relevant news story. Just a heads-up.

Well, it’s one month later, and I’ve finished strong. The defense of the project went well, and I got some great feedback. Carl DiSalvo considered my ideology methodology a good first pass and suggested expanding upon it. I hope to do so in the future, but part of the problem was that the ideology axis was really more a measure of partisanship, so I’ve changed that axis to partisanship. I also changed my methodology for determining partisanship slightly, but I’ll discuss that in the forthcoming Methodology section.

I’ve added a lot more viewing options, including the ability to show or hide the network. But I’ve realized that I need to reconsider the collision algorithm or just abandon it altogether. It causes too large a performance hit, especially if you’re also drawing an extensive network and labels. I’m instead going to consider a “jitter” function, which would add some noise to each circle’s location with each button press.
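
The jitter function should be cheap compared to collision: no pairwise checks, just a one-shot random nudge to every circle when the button is pressed. Roughly, with a placeholder count and key binding:

    // One possible "jitter": every press nudges each circle by a small
    // random offset, instead of resolving overlaps every frame.
    int n = 435;                      // placeholder count (House members)
    float[] xs = new float[n];
    float[] ys = new float[n];

    void setup() {
      size(800, 600);
      for (int i = 0; i < n; i++) {
        xs[i] = random(width);
        ys[i] = random(height);
      }
    }

    void draw() {
      background(0);
      noStroke();
      fill(255);
      for (int i = 0; i < n; i++) ellipse(xs[i], ys[i], 6, 6);
    }

    void keyPressed() {
      if (key == 'j') {               // stand-in for the jitter button
        for (int i = 0; i < n; i++) {
          xs[i] += random(-4, 4);     // small random nudge per circle
          ys[i] += random(-4, 4);
        }
      }
    }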

But overall, the project has a very solid base. In the next week, I will be migrating the project to its new home at clearcongressproject.com. Posting might be limited this week, but look for the Methodology section and other updates to the structure of the blog.

A final thanks to my advisors on the project: Ian Bogost, Carl DiSalvo, and Janet Murray. My experience in the DM program has been life-altering, and it was good to have access to such great minds throughout this sometimes-rocky process.

I’m still behind where I’d like to be, but you can check out my progress here.

This week I added collision to the legislators. They slowly push away from each other when overlapping, starting from the scatter plot layout (political spectrum on the x-axis, leader-follower score on the y-axis). Soon the y-axis will be replaced with the Media Quotient (MQ), which I will explain later. I will also soon be tracking the connections between legislators through co-sponsorship, which will be displayed as tendrils between legislators and will apply a light force between them (creating natural clusters of co-sponsors and thereby political factions). I’m still working on implementing the real-time feeds that will be available for each legislator (and on how to display these feeds – I’m running out of pixels already!).
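
For anyone curious how the collision behaves, the core of it is a simple pairwise check: any two circles closer than their combined radius get pushed apart a little each frame. Here is a stripped-down toy version, not the project’s actual code:

    // Bare-bones overlap resolution for a handful of circles.
    int n = 60;                        // small count, just for the demo
    float r = 8;                       // circle radius
    PVector[] pos = new PVector[n];

    void setup() {
      size(600, 600);
      for (int i = 0; i < n; i++) {
        pos[i] = new PVector(width/2 + random(-20, 20),
                             height/2 + random(-20, 20));
      }
    }

    void draw() {
      background(0);
      for (int i = 0; i < n; i++) {
        for (int j = i + 1; j < n; j++) {
          PVector d = PVector.sub(pos[i], pos[j]);
          float sep = d.mag();
          if (sep > 0 && sep < 2 * r) {   // overlapping
            d.normalize();
            d.mult(0.5);                  // gentle push so they drift apart
            pos[i].add(d);                // push i away from j
            pos[j].sub(d);                // and j away from i
          }
        }
      }
      noStroke();
      fill(255);
      for (int i = 0; i < n; i++) ellipse(pos[i].x, pos[i].y, 2*r, 2*r);
    }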

I also reworked the aesthetic, removing the alpha from the legislators.  This may change later, or may be used to help highlight the user’s “focus legislator”.

Yeah, this is a bad news / good news post.

First: Progress.

Second, bad news. I spent most of yesterday attempting to get JavaScript and XML to play nicely together. I was hoping to streamline my back end this way, but I hit a roadblock when attempting to get the variables and arrays constructed from my XML pulls (via AJAX) to integrate into my Processing program. Since I was using AJAX anyway, I decided to just include jQuery, since I’m already somewhat familiar with it. But, in the end, I was unable to get past the Processing roadblock. I will probably keep working on this in the future, but until I find a solution, I will continue with my PHP+SQL setup.

Finally, good news. I’ve decided on a direction for my project (as long as it’s approved by my advisor), and my current progress hints at its final form. I’ve decided to abandon historical data in favor of real-time data, in no small part due to the recent release of the Real-Time Congress API from Sunlight Labs. The visualization will now be an attempt to display the “political-media” zeitgeist, plotting legislators on a sort of scatter plot with the political spectrum along the x-axis and a derived “media quotient” along the y-axis. I’ll talk more about this media quotient later. The radius of each legislator’s circle will be determined by their “political capital”, derived from the number of bills they are sponsoring. In this way, a viewer gets a real-time view of the political-media landscape.
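
To make the plotting rule concrete: political spectrum maps to x, media quotient maps to y, and political capital maps to the circle’s size. Here is a rough Processing sketch of that mapping; the field names, value ranges, and sample numbers are all assumptions for illustration:

    // How the plotting rule might read in Processing (illustrative only).
    class Legislator {
      float spectrum;        // political spectrum score, assumed 0..1
      float mediaQuotient;   // derived "media quotient", assumed 0..1
      int   billsSponsored;  // stand-in for "political capital"

      Legislator(float s, float mq, int bills) {
        spectrum = s; mediaQuotient = mq; billsSponsored = bills;
      }

      void display() {
        float x = map(spectrum, 0, 1, 40, width - 40);         // left/right
        float y = map(mediaQuotient, 0, 1, height - 40, 40);   // low/high MQ
        float d = map(billsSponsored, 0, 60, 6, 30);           // radius ~ capital
        ellipse(x, y, d, d);
      }
    }

    void setup() {
      size(800, 600);
    }

    void draw() {
      background(0);
      fill(255);
      noStroke();
      new Legislator(0.3, 0.8, 25).display();   // one fake example member
    }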

The second portion of this application, underneath the visualization, will be links and feeds to the selected legislator’s various media mouthpieces – their C-SPAN, Twitter, and YouTube feeds, for instance. I also hope to include recent news stories related to them.

Perhaps the aspect I’m most excited about, however, is adding some basic physics to the visualization. The legislators will then bump into each other, crowding out each other’s space within this political-media landscape. Furthermore, when a legislator is selected, I hope to release “tendrils” from that legislator which connect to other legislators based on their co-sponsorship of bills (info available from the Real-Time API), possibly if they are mentioned in another person’s speech, and possibly also to represent committee relationships. Whether these tendrils will also have basic physics of their own… well, I’d like them to. Here’s hoping there’s time.
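
If the tendrils do get physics, it could be as simple as a very light pull along each co-sponsorship link, so connected legislators drift together into clusters. A toy two-legislator version with made-up positions and pull strength:

    // Toy "tendril": draw a line between two co-sponsors and pull them
    // slightly toward each other so clusters can form over time.
    PVector a = new PVector(150, 300);
    PVector b = new PVector(450, 200);

    void setup() {
      size(600, 500);
    }

    void draw() {
      background(0);
      PVector pull = PVector.sub(b, a);
      pull.mult(0.005);            // very light force, so motion stays gentle
      a.add(pull);
      b.sub(pull);
      stroke(120);
      line(a.x, a.y, b.x, b.y);    // the tendril
      noStroke();
      fill(255);
      ellipse(a.x, a.y, 14, 14);
      ellipse(b.x, b.y, 14, 14);
    }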

In this way, I think my visualization will be a useful tool for journalists and political junkies to get a real-time, aggregated view of the political-media landscape. At the same time, I hope it will serve in some ways as a criticism of the role the media now plays in political power. More on this later.

Did a little work this evening adding animation to the project. Again, check it out here. Press any key to watch the data points animate between two states: a basic overview state and a scatter plot state. All the data is still placeholder data, and the rollover text at the top is mainly just for testing purposes. In any case, I’ve got the animation working and will now be able to move the data points between states rather elegantly. Of course, animation lends itself to historical data, since the passage of time maps to, well, the passage of time quite well. GovTrack will be my primary source for historical data, but I’m still looking for others.
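
The state animation itself is just a tween: every point has an “overview” position and a “scatter” position, and a key press flips which one it eases toward. A small sketch of the mechanism, with a placeholder layout rather than the real data:

    // Press any key to tween each point between an "overview" grid position
    // and a random "scatter" position.  Layout and counts are placeholders.
    int n = 40;
    PVector[] overview = new PVector[n];
    PVector[] scatter  = new PVector[n];
    float t = 0;          // 0 = overview, 1 = scatter
    int dir = 1;          // which way the tween is heading

    void setup() {
      size(640, 400);
      for (int i = 0; i < n; i++) {
        overview[i] = new PVector(40 + (i % 10) * 40, 60 + (i / 10) * 40);
        scatter[i]  = new PVector(random(width), random(height));
      }
    }

    void draw() {
      background(0);
      t = constrain(t + 0.02 * dir, 0, 1);
      noStroke();
      fill(255);
      for (int i = 0; i < n; i++) {
        float x = lerp(overview[i].x, scatter[i].x, t);
        float y = lerp(overview[i].y, scatter[i].y, t);
        ellipse(x, y, 8, 8);
      }
    }

    void keyPressed() {
      dir = -dir;          // any key flips between the two states
    }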

I spent another few hours this evening poking through data. The source data at GovTrack is proving invaluable, and I will rely heavily on it. I still have to figure out what to do with the XML format, though… either I’m going to switch from accessing SQL to using these XML files directly, or I will convert all the XML files I use into CSVs and import them into my database. GovTrack DOES provide political spectrum data, which is a huge boon, though it only goes back to the 100th Congress. It also contains a “leader-follower” score, a derived statistic based on bill sponsorship. More info on the methodology for these statistics here.
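
If I go the CSV route, reading the flattened GovTrack data from Processing takes only a few lines with loadStrings() and split(). The file name and column order below are hypothetical:

    // Read a (hypothetical) flattened CSV of GovTrack data.
    void setup() {
      String[] rows = loadStrings("legislators.csv");   // hypothetical file
      if (rows == null) return;                         // file not found
      for (int i = 1; i < rows.length; i++) {           // skip header row
        String[] cols = split(rows[i], ',');
        String name         = cols[0];
        float  spectrum     = float(cols[1]);           // political spectrum score
        float  leaderFollow = float(cols[2]);           // leader-follower score
        println(name + ": " + spectrum + ", " + leaderFollow);
      }
    }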

I’ve all but abandoned the idea of using the Cook PVI scores to track political spectrum data, because I would have to purchase a very expensive subscription in order to access this data.   I spent more time poking through lobbying data, but unfortunately it’s hard to draw connections between this information and individual legislators.  I’m still considering the earmark data because it’s relatively easy to tie to individual legislators, and it has some great numerical data points (and it goes back a few years).

My main issue still remains: What to focus on?  What story will I tell?  Earmarks might be the way to go…

To get a handle on the number of data points I’m dealing with, I did some sketching with Processing. Here are some of the pics. The main thing I realized is that with 535 data points, screen space will be at a premium. I also have to be careful to use icons that don’t get too muddled en masse or cause weird optical effects (you’ll see some below). I haven’t played with transparency yet, but I’m sure it will come into play, especially if I end up using some sort of network view, where the data points may end up overlapping on screen.
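
A density test along these lines gives a feel for the problem: 535 small squares on a typical canvas. This is a toy version, not my actual sketch code:

    // Density test: 535 small squares on an 800x600 canvas.
    void setup() {
      size(800, 600);
      background(255);
      noStroke();
      fill(0);
      int cols = 30;                        // squares per row
      for (int i = 0; i < 535; i++) {
        int x = 20 + (i % cols) * 25;
        int y = 20 + (i / cols) * 25;
        rect(x, y, 10, 10);
      }
    }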

A few notes. As you can tell, I’m definitely leaning towards stacked squares as the default visual form. I tried everything I could think of to make the circles work, but given the number of data points, the circles just will not play nicely, mainly because the space between dots causes a distracting optical effect. I also like the stacked boxes for creating a series of vertical histograms, a great way to display multivariate data, especially when much of it is nominal. However, I’m still hoping to have a “scatter” and a “network” mode to allow for some other viewing methods.
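
By “stacked squares” I mean columns of small boxes, one box per legislator, stacked up per category, which effectively forms a set of vertical histograms. A toy version with made-up category counts:

    // Stacked-square histogram: one column per category, one square per
    // legislator in that category.  The counts here are made up.
    int[] counts = { 12, 25, 22, 8, 17 };

    void setup() {
      size(400, 300);
      background(255);
      noStroke();
      fill(0);
      int s = 8;                                   // square size
      for (int c = 0; c < counts.length; c++) {
        for (int i = 0; i < counts[c]; i++) {
          rect(40 + c * (s + 12), height - 20 - i * (s + 2), s, s);
        }
      }
    }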

I’m still struggling with the issue of a topic focus.  As I look through my data sets, I’m finding myself spending a lot of time examining lobbying and campaign contribution numbers.  This is mostly due to the wonderful datasets provided by OpenSecrets.org.  Sunlight Labs has leveraged this dataset to create Influence Explorer.  While this is helpful in finding detailed information, it fails to provide a good overview of the entire organization/individual-lobbying-representative network.  It is more about searching than browsing (the main way to access data is through a search box).

As Prof. Stasko drilled into us again today in my Info Viz class: “Overview first, zoom and filter, then details-on-demand” (Shneiderman). Influence Explorer goes straight to the details-on-demand step. My viz might focus on the same dataset as Influence Explorer, but it will distinguish itself by including the “overview” and “zoom and filter” steps. This will create context and enable broader pattern discovery.

It’s becoming increasingly clear to me that my data sources will provide my analytic questions (which, hopefully, the viz will help answer). This seems more natural than coming up with a topic and then looking for datasets to answer those questions.

I can’t waste any more time coming up with a concrete analytic question. Moving forward, I will spend part of my time dealing with these issues, but the bulk of my time will be spent on development. I must start building! I’m hoping that as I start connecting my data source to the interface, my focus will start to take shape.
