Mininook

Musings on Christianity, Politics, and Computer Science Geekery

Hardness and Political Choices

Left image: 2012 Election Results (1), Right image: Hardest Places to Live in the US (2).

A few weeks ago, the New York Times posted a great article on the hardest places to live in the United States, based on education, median income, unemployment rate, disability rate, and a few other factors.  It is an incredible article, and I recommend reading it at nytimes.com.  As soon as I saw their graphic, I immediately wondered whether there was a connection between political persuasion and hardness.  To look into this, I grabbed Mark Newman's version of the 2012 election results and a nice image comparator so that the two maps can be overlaid and compared.  The results are interesting!

For clarification: On the election results (left), each district is colored on a gradient from blue to red based on the percentage of the vote for the winning candidate (purple would mean an even Obama/Romney split).  For the NYTimes hardness results (right), districts are colored on a gradient from orange to green, where orange is worse (harder to live in) and green is better.

What to make of this?  I don't know.  In the eastern part of the country (areas including New England, Kentucky, Michigan, Illinois, and parts of Virginia), it appears that the more liberal areas are usually the easier places to live, and the harder places to live are usually more conservative.  However, in the mid-west (the entire middle of the country west to California), it appears to be just the opposite.  In any case, it's interesting to think about!

References

1.  Newman, Mark. "Maps of the 2012 US Presidential Election Results." N.p., 8 Nov. 2012. Web. 14 Oct. 2014. <http://www-personal.umich.edu/~mejn/election/2012/>.

2.  Flippen, Alan. "Where Are the Hardest Places to Live in the U.S.?" The New York Times. The New York Times, 25 June 2014. Web. 14 Oct. 2014. <http://www.nytimes.com/2014/06/26/upshot/where-are-the-hardest-places-to-live-in-the-us.html>.

VI Tricks

I may be stuck in the past, or perhaps I just like punishment, but my editor of choice is still VIM.  However, certain tricks seem to be hard to find via Google searches, so I'm going to compile them here:

  • Creating custom commands and keyboard mappings is easy in VIM.  To create a custom command, add a command definition to the .vimrc file (user-defined command names must begin with an uppercase letter, and command! avoids an error when the file is re-sourced).  The % character expands to the current buffer's filename in the shell command.
    command! CommandName execute "!shellcommand %"
    The new command can be run in VIM using the standard :CommandName convention.  To map it to a keyboard shortcut, use the map command in the .vimrc file.
    map <F5> :CommandName<CR>

Command Line Tricks

I am always using command line shortcuts for various tasks, and I often have to look up the tricks every time I need to do something remotely fancy.  Here are some of my most-used helpful hints:

  • To remove leading spaces and tabs from each line of text on standard input (so use it with a pipe for the input), this sed command works well:
    sed -e 's/^[ \t]*//'
  • To reformat XML/HTML files so that line returns inside tags are removed:
    xmllint --format --noblanks infile.xml > outfile.xml

Boots: New Machine Learning Approaches to Modeling Dynamical Systems

Large streams of data, mostly unlabeled.

The machine learning approach: fit models to data.  How does it work?  Take the raw data, hypothesize a model, and use a learning algorithm to fit the model parameters to the data.
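
As a toy illustration of that loop (my own sketch, not code from the talk): hypothesize a linear dynamical system x_{t+1} = A x_t + noise, then let least squares be the learning algorithm that recovers the parameter A from the raw data stream.

    import numpy as np

    # Toy fit-a-model-to-data loop (illustrative sketch, not the talk's code).
    rng = np.random.default_rng(0)
    A_true = np.array([[0.9, 0.2], [-0.1, 0.8]])  # unknown "true" parameters

    # Raw data: simulate a trajectory from the true system.
    X = [rng.normal(size=2)]
    for _ in range(500):
        X.append(A_true @ X[-1] + 0.05 * rng.normal(size=2))
    X = np.array(X)

    # Learning algorithm: least squares on x_{t+1} = A x_t.
    W, *_ = np.linalg.lstsq(X[:-1], X[1:], rcond=None)
    A_hat = W.T

    print(np.round(A_hat, 2))  # close to A_true, i.e. \theta \approx \theta^*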

What makes a good machine learning algorithm?

  • Performance guarantees: the learned parameters approach the true ones, \theta \approx \theta^* (statistical consistency and finite-sample bounds)
  • Real-world sensors, data, resources (high-dimensional, large-scale, ...)

For many types of dynamical systems, learning is provably intractable. You must choose the right class of model, or else all bets are off!

Look into:

  • Spectral Learning approaches to machine learning (a rough sketch of the general flavor follows below)
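
I don't know exactly which spectral algorithms were meant, but here is a minimal sketch of the general flavor (all choices here are my own, illustrative ones): the Hankel matrix built from a linear system's impulse response has rank equal to the hidden state dimension, so an SVD of noisily estimated responses exposes the latent structure.

    import numpy as np

    # Hedged sketch of the spectral idea (illustrative, not the talk's method):
    # an SVD of the Hankel matrix of impulse responses reveals the hidden
    # state dimension of a linear dynamical system.
    A = np.array([[0.9, 0.2], [-0.1, 0.8]])   # hidden 2-state dynamics
    B = np.array([1.0, 0.0])
    C = np.array([0.0, 1.0])

    # Impulse response h_t = C A^t B, "estimated" here with a little noise.
    rng = np.random.default_rng(1)
    h = [C @ np.linalg.matrix_power(A, t) @ B + 1e-4 * rng.normal()
         for t in range(40)]

    # Hankel matrix H[i, j] = h[i + j]; inspect its singular values.
    H = np.array([[h[i + j] for j in range(20)] for i in range(20)])
    s = np.linalg.svd(H, compute_uv=False)
    print(np.round(s[:5], 4))  # only two values stand out: the latent
                               # state dimension is recovered spectrally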

Basener: Topological and Bayesian Methods in Data Science

  • Topology: Encompasses the global shape of the data and the relations between data points or groups within that global structure
    • Google PageRank algorithm
    • Example: Cosmic Crystallography
      • Torus universe (zero curvature)
      • Spherical universe (positive curvature)
      • Other universe (negative curvature)
  • Data: Hyperspectral Imagery
  • Gradient Flow Algorithm (a rough sketch follows this list)
    • for each data point, identify the neighbor with the highest density (an arrow points from that point to that neighbor)
      • this gives a flow field over the data
    • follow the arrows to identify clusters
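
Here is a minimal sketch of that gradient flow idea as I understood it; the kNN density estimate and the neighborhood size k are my guesses, not Basener's specifics:

    import numpy as np

    def gradient_flow_clusters(X, k=8):
        # Pairwise distances between all points.
        D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
        # Crude kNN density estimate: inverse distance to the k-th neighbor.
        density = 1.0 / (np.sort(D, axis=1)[:, k] + 1e-12)
        # Arrow: each point points at the densest point among its k nearest
        # neighbors (itself, if it is a local density maximum).
        nbrs = np.argsort(D, axis=1)[:, :k + 1]  # includes self
        target = np.array([nb[np.argmax(density[nb])] for nb in nbrs])
        # Follow the arrows until reaching a fixed point (a density mode);
        # points that flow to the same mode form one cluster.
        def mode(i):
            while target[i] != i:
                i = target[i]
            return i
        return np.array([mode(i) for i in range(len(X))])

    # Example: two well-separated blobs should flow to two modes.
    rng = np.random.default_rng(2)
    X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
    print(len(set(gradient_flow_clusters(X))))  # expect 2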

people.rit.edu/wfbsma/data/NINJA_MAIN_self_test_refl_RX.img.html


