Project Description: Chicago Crime

My final project is an in-depth analysis of crime in Chicago. The study focuses primarily on the last eleven years of available crime data and emphasizes socioeconomic trends and correlations within the field of “Deceptive Practices.” The project is subdivided as follows:

Part I: “The Residential and Commercial Components of Deceptive Practices.”

Statement of Intent: After the fall of Enron and other multinational corporations in 2001, a major crackdown on corporate and commercial fraud was instituted under the Bush administration and the Securities and Exchange Commission (SEC). Since many corporate and commercial entities reside in Chicago, the first stage of this project maps fraud and other crimes of deception in both the residential and commercial realms. The hypothesis is that, since the 2001 crackdown, individuals are more discouraged from committing fraud in a commercial setting and are therefore more likely to commit these acts at home. This initial section deconstructs all 150,000 fraudulent acts and examines each in graph and map form on an annual basis.

Process: Data was imported from the database into Quantum GIS (QGIS), where it was converted into heat maps showing which wards saw the largest number of deceptive practices per year in both commercial and residential settings. An additional heat map for each year shows which wards had the greatest ratios of residential- to commercial-based fraud, produced by dividing the former total by the latter for each ward and re-mapping. After totals for each ward were extracted using the “points in polygon” query in QGIS (an exhausting process given the volume of data), the totals were re-entered into a spreadsheet to graph the distributions. Lastly, three graphs showing the overall distribution of deceptive practices across all years were created in order to draw conclusions about the distribution.
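For readers curious what the “points in polygon” query and the ratio step actually compute, here is a minimal Python sketch of the same idea. The ward boundary, incident coordinates, and function names are all invented for illustration; the real work was done inside QGIS on the full dataset.

```python
from collections import Counter

def point_in_polygon(x, y, polygon):
    """Ray-casting test: is (x, y) inside the polygon (a list of (x, y) vertices)?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x-coordinate where this edge crosses the horizontal ray through y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def fraud_ratios(incidents, wards):
    """Count residential and commercial incidents per ward, then compute the
    residential-to-commercial ratio used for the third set of heat maps."""
    counts = Counter()
    for x, y, setting in incidents:  # setting: "residential" or "commercial"
        for ward_id, polygon in wards.items():
            if point_in_polygon(x, y, polygon):
                counts[(ward_id, setting)] += 1
                break
    ratios = {}
    for ward_id in wards:
        res = counts[(ward_id, "residential")]
        com = counts[(ward_id, "commercial")]
        ratios[ward_id] = res / com if com else float("inf")
    return counts, ratios

# Toy example: one square ward containing three incidents
wards = {1: [(0, 0), (10, 0), (10, 10), (0, 10)]}
incidents = [(2, 2, "residential"), (3, 7, "residential"), (5, 5, "commercial")]
counts, ratios = fraud_ratios(incidents, wards)
print(ratios[1])  # 2 residential / 1 commercial -> 2.0
```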

Results: The final graphs are quite interesting. Though the initial study is somewhat skewed because data from the preceding decade was unavailable for comparison, the graphs suggest a large decrease in residential- and commercial-based fraud in wards 1-10 after 2001. A close look also reveals a slight but steady rise in deceptive practices committed in residential settings, alongside a sharper decrease in those committed in commercial settings. Likewise, the ratio of residential- to commercial-based fraud appears to have increased greatly since 2001 in almost every ward, suggesting a reallocation in several wards of where these crimes are committed.


Part II: “The Sociocultural Correlations of Deceptive Practices.”

Statement of Intent: The original intent of the second portion of the project was to investigate various sociocultural components of Chicago and determine whether there is any correlation between where they occur and where deceptive practices occur. In selecting these components, it was important that each relate, in some way, to the pathology of why one commits crime. This next series of maps therefore investigates the link between a ward’s level of fraud and its median home value, its adjacency to educational facilities, and its rate of population increase or decrease since 2001.

Process: Heat maps of all three components were generated and, once again, each ward’s data was extracted via QGIS’s spatial query functions into spreadsheet format for graphing. The graphs in this section are especially important, since they were used to determine the coefficient of correlation for each query. This was done by graphing the D.P.’s occurring in each ward and superimposing the corresponding graph (whether schools per ward or home value per ward) on top of it. Doing so yielded fifty x-points [the D.P.’s per ward] and fifty y-points [the compared value per ward] for each pairing, which were then input into a correlation calculator to produce a separate coefficient of correlation for each relationship-query.
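The “correlation calculator” step is easy to reproduce by hand. Here is a minimal Python sketch of the Pearson coefficient of correlation; the per-ward numbers below are invented for illustration, not the project’s actual fifty-ward data.

```python
import math

def pearson_r(xs, ys):
    """Pearson coefficient of correlation between two equal-length series."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy data: deceptive practices per ward (x) vs. schools per ward (y).
# Wards with more schools are given fewer D.P.'s, mimicking the negative
# relationship reported in the results.
dps_per_ward = [120, 95, 80, 60, 40]
schools_per_ward = [2, 3, 4, 6, 8]
print(round(pearson_r(dps_per_ward, schools_per_ward), 3))  # -0.981
```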

Results: While the previous section’s results confirmed the initial hypothesis, the coefficients of correlation in this section were quite low. The highest positive correlation was just over .25, suggesting only a weak positive relationship. The comparison between schools and D.P.’s per ward, however, produced a moderately strong negative correlation of -.456, suggesting that a lower number of schools in a ward corresponds with a higher number of deceptive crimes within the same boundary.


Part III: “The Urban & Exurban divide in Chicago Crimes”

Statement of Intent: Whereas the previous sections examined deceptive practices in close detail, this last area of crime analysis examines all Chicago-based crimes from 2001-2008 on a formal and graphical level. At first glance, the largest concentration of crime appears to occur in the downtown area, but this ignores the effect each ward’s population has on the distribution. For example, one ward may appear to host a large number of crimes simply because its population is much higher. I was therefore determined to examine whether most crimes actually do occur in greater number in the downtown area or in the suburban/exurban regions of Cook County by re-contextualizing the total crimes per ward against population.

Process: To visualize the distribution of each crime, maps were generated iteratively, showing the point-based assortment of crime across the region in two-year intervals. Spatial queries in QGIS were subsequently executed to determine the number of crimes in each ward for every two-year interval. The total number of urban/downtown-based crimes was then divided by the total downtown population and, conversely, the total number of exurban/suburban crimes by the total exurban/suburban population. Though lengthy, this process produced ratios that tell whether each crime occurs more in an urban or an exurban setting relative to population. These ratios were then graphed to show the change over time, and a simple heat map for each crime was generated to show the wards in which the corresponding crime occurred the most throughout the eight-year interval.
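The re-contextualization step boils down to a per-capita division. A minimal Python sketch with invented totals (not the actual Cook County figures) shows how a region with fewer raw crimes can still have the higher per-capita rate:

```python
def per_capita_rates(crimes, population):
    """Crimes per resident for each region (keys shared by both dicts)."""
    return {region: crimes[region] / population[region] for region in crimes}

# Toy numbers: downtown has more raw crimes, but also far more people
crimes = {"urban": 50_000, "exurban": 20_000}
population = {"urban": 1_000_000, "exurban": 250_000}
rates = per_capita_rates(crimes, population)
print(rates["urban"], rates["exurban"])  # 0.05 0.08 -> higher rate outside downtown
```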


XML in Processing

I’ve found a couple of useful tools for importing URL-based XML into Processing, and then querying/storing it on a PostgreSQL DB.

SQLibrary lets Processing communicate with a DB, and works with more than just PostgreSQL.

XMLElement is a Processing object that can be used to access and parse URL-based XML. It provides an in-memory representation of the XML tree within Processing.
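SQLibrary and XMLElement are Processing-specific, but the pipeline they enable (fetch XML, parse it, insert rows into a database) is easy to illustrate in plain Python. In this sketch the inline string stands in for a URL fetch and sqlite3 stands in for PostgreSQL; the XML schema and table layout are made up for the example.

```python
import sqlite3
import xml.etree.ElementTree as ET
# For real URL-based XML: from urllib.request import urlopen; ET.parse(urlopen(url))

# Stand-in for an XML payload fetched from a URL
xml_source = """
<crimes>
  <crime ward="1" type="THEFT"/>
  <crime ward="2" type="FRAUD"/>
</crimes>
"""

root = ET.fromstring(xml_source)

# sqlite3 stands in for PostgreSQL here; with a PostgreSQL driver the
# CREATE/INSERT statements take the same shape
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE crimes (ward INTEGER, type TEXT)")
for crime in root.iter("crime"):
    db.execute("INSERT INTO crimes VALUES (?, ?)",
               (int(crime.get("ward")), crime.get("type")))

rows = db.execute("SELECT ward, type FROM crimes ORDER BY ward").fetchall()
print(rows)  # [(1, 'THEFT'), (2, 'FRAUD')]
```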

UCLA’s Digital Roman Forum

In continuation of my posts on digital Roman reconstructions, this is an effort undertaken by the UCLA Cultural Virtual Reality Laboratory to digitally reconstruct the Roman Forum in Level 3/Level 4 complexity. While the model itself was built between 1997 and 2003, the lab has since created an online portal into the model through which scholars can both contribute information regarding the architecture/history and navigate the model to better inform their own work. As the creators write: “The purposes of this site are to use the Internet to permit free use and easy viewing of the digital model by people all over the world; to provide documentation for the archaeological evidence and theories utilized to create the model; and to offer basic information about the individual features comprising the digital model so that their history and cultural context can be readily understood.”

The date upon which this model is based is June 21, A.D. 400. (It is not clear whether efforts are being made to create a time-based slider similar to Google Maps.) In the portal, one can browse through different layers of the model by primary source (such as Livy, Tacitus, or Vitruvius), by function (such as honorary, religious, or legislative structures), and by architectural type (such as basilica, column, or house).

The portal also lists a table of geospatial data from which users can extract the coordinates and shapefiles of each building in the Forum. The website for the overall project can be found here:




The Digital Pompeii Project

In addition to the digital version of Rome that I found earlier, I was curious to see if any attempts had been made to create a digital model of the ancient city of Pompeii. As it turns out, the University of Arkansas’s Fulbright College of Arts and Sciences has been working on an initiative to digitally model Pompeii as a joint venture among the college’s Department of Humanities, the Classical Studies Program, and the Center for Advanced Spatial Technology. According to the creators: “Its goal is the creation of a comprehensive database for visual art and material culture at Pompeii.”

Amazingly, the group is also creating a digital database in which to store archaeological evidence of visual art recovered from the remains of the city. According to the creators: “We are beginning with the Italian reference work Pompei: pitture e mosaici (PPM), scanning all of its plates and entering data for location, style, color, materials, workshop, etc., together with searchable descriptions of paintings and mosaics.” Essentially, the database will be searchable by space, so one can investigate a building and subsequently learn about the artifacts uncovered at that particular location. While based on PPM, however, the database is intended to be scalable, and to incorporate other kinds of data found in the numerous archaeological expeditions throughout the city. The database will be integrated with a navigable 3D model of Pompeii, so that one can “walk” through the recreated city in real time. This will allow for any painting or mosaic (or even wider search results, e.g. “all paintings of” a particular artist) to be viewed and explored in their spatial context, in relation to the surrounding decorations and not simply as a separate pop-up window.

At the moment, the creators have posted online demo models of the House of the Vettii (Regio 6, Insula 15, doorway 1) and the House of the Prince of Naples (Regio 6, Insula 15, doorways 7 and 8) as examples of what is to come in the future. The demos can be viewed here:

The hope is for the digital model to be integrated into both humanities and archaeology classes at the university, so that students can explore it in real time as it is continually updated and expanded. Like Wikipedia or Google 3D Warehouse, the database accepts contributions from independent users, with models of varying scales of complexity. The database is organized specifically by region and subdivided by doorway (the Roman equivalent of an address). The project’s overall website can be found here:



Tracking Diseases Using Google Maps on Your Smartphone

This was an interesting news item I came across regarding new methods for tracking diseases via Google Maps. The attachment was inspired by the need to instantly check for disease or other medical conditions using rapid diagnostic tests (such as tests for pregnancy, high glucose levels, strep throat, or blood pressure). Researchers and individuals complain that such tests (which often require rather tedious procedures) are highly prone to human error. Therefore, researchers at the University of California, Los Angeles (UCLA) have developed a physical plugin for one’s phone to assist with rapid diagnostic testing via smartphone technology.

According to Elizabeth Moore: “The researchers say the attachment can read almost every type of RDT available; all the user does is insert the RDT strip into the attachment, which is then converted into a digital image via the phone’s built-in camera. An app then determines two things: whether the digital RTD is valid and whether the results are positive or negative. But the team didn’t stop there. They have the reader transmit these results wirelessly to a server for processing, storage, and mapping via Google Maps to track the spread of specific conditions and diseases globally over time.”

What is potentially much more interesting, and certainly much more important on a social and scientific level, is that this gives researchers an improved ability to track diseases and other health trends via Google Maps. The attachment transmits a reading to a database upon recognition of one’s test sample, after which it is plotted as point data alongside millions of other contributions. It will also help scientists understand potential cause-and-effect relationships at a much larger scale for combating infectious diseases in the future. If the devices are cost-effective and can be brought to the general market, the public will be granted access to disease trends as well.

To read more about the development visit:

Rome Reborn

This is an interesting project I saw in a Yale lecture on Roman architecture regarding the digital reconstruction of ancient Rome. Similar to Google’s attempt to bring the information from the Sanborn fire maps back to life, this urban-scale model of Rome features digital recreations of all the ancient buildings comprising the Seven Hills of Rome. Officially entitled “Rome Reborn,” the initiative was started at the University of Virginia in 1997 and offers users functionality similar to Google Maps.

The coolest thing about the project is the ability to pan back and forth through different time periods and watch the buildings reconfigure themselves based upon the chosen period. The platform allows users to experience the city from its first settlement in the late Bronze Age (ca. 1000 B.C.) to its depopulation in the early Middle Ages (ca. A.D. 550). In addition to the accuracy of the structures and their corresponding footprints, designers had to be conscious of the surrounding topography of the landscape and estimate its changes over time.

Though released under UVA, the project was a collaboration among the university’s History and Architecture departments, the UCLA Experiential Technology Center (ETC), the Reverse Engineering (INDACO) Lab at the Politecnico di Milano, the Ausonius Institute of the CNRS, the University of Bordeaux-3, and the University of Caen. According to the developers: “A secondary, but important, goal was to create the cyberinfrastructure whereby the model could be updated, corrected, and augmented. Spatialization and presentation involve two related forms of communication: (1) the knowledge we have about the city has been used to reconstruct digitally how its topography, urban infrastructure (streets, bridges, aqueducts, walls, etc.), and individual buildings and monuments might have looked; and (2) whenever possible, the sources of archaeological information or speculative reasoning behind the digital reconstructions, as well as valuable online resources for understanding the sites of ancient Rome, have been made available to users. The model is thus a representation of the state of our knowledge (and, implicitly, of our ignorance) about the urban topography of ancient Rome at various periods of time.”

The models also represent different levels of complexity. Lesser-known structures are modeled at Level 1, with an applied texture on a simple mass, while better-documented structures are Level 2, with articulation of the roof plane. The most famous landmarks (such as the Circus Maximus and the Colosseum) are rendered at Level 3 complexity, and some even at Level 4.

The project website can be found here:

And an article and video about the project can be found here: