Google Maps Twitter Mashup

Google Maps
The map data you see inside Google Maps comes from multiple sources, including user-provided data, but is primarily compiled by Tele Atlas, a global leader in navigation and location-based services. For our mashup we will be combining Google Maps with Twitter, so all of the map data will be called from Google's database. We will not be downloading any GIS data; it all essentially lives in the cloud. The specific sources of the map data may be found in the lower right corner of the map.

Add Google Maps to your website
In order to add the full interactivity of Google Maps to your website you need to obtain an API key. Most providers of this kind of data or service require a key in order to control who uses the data, how, and how much. This is the trade-off of using someone else's database versus using your own. To create your API key:

1. Visit the APIs Console at https://code.google.com/apis/console and log in with your Google Account.
2. Click the Services link from the left-hand menu.
3. Activate the Google Maps API v3 service.
4. Click the API Access link from the left-hand menu. Your API key is available from the API Access page, in the Simple API Access section. Maps API applications use the "Key for browser apps."

Google provides a reference listing all of the options and methods you can use to control the map and improve its usability.

This is what the code behind the embedded map looks like:

<!DOCTYPE html>
<html>
  <head>
    <meta name="viewport" content="initial-scale=1.0, user-scalable=no" />
    <style type="text/css">
      html { height: 100% }
      body { height: 100%; margin: 0; padding: 0 }
      #map_canvas { height: 100% }
    </style>
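    <!-- Load the Maps API; replace YOUR_API_KEY with the key obtained above and set the sensor parameter appropriately -->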
    <script type="text/javascript"
      src="http://maps.googleapis.com/maps/api/js?key=YOUR_API_KEY&sensor=SET_TO_TRUE_OR_FALSE">
    </script>
    <script type="text/javascript">
      function initialize() {
        var myOptions = {
          center: new google.maps.LatLng(-34.397, 150.644),
          zoom: 8,
          mapTypeId: google.maps.MapTypeId.ROADMAP
        };
        var map = new google.maps.Map(document.getElementById("map_canvas"),
            myOptions);
      }
    </script>
  </head>
  <body onload="initialize()">
    <div id="map_canvas" style="width:100%; height:100%"></div>
  </body>
</html>

Notice the lat/long reference: this particular example is centered on Sydney, Australia. The zoom option is self-explanatory and is one of the options listed above. Now that we have the base map, it's time to bring in the Twitter feeds.
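
Since our mashup focuses on Chicago, you might re-center the base map there once it loads. A minimal sketch, assuming the map variable created in initialize() is kept somewhere accessible; the coordinates are the same Chicago lat/long used in the Twitter query below:

//Re-center on downtown Chicago and zoom to a city-wide view
map.setCenter(new google.maps.LatLng(41.881256, -87.62485));
map.setZoom(11);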

Accessing the Twitter API
http://search.twitter.com/search.json?rpp=25&geocode=41.881256,-87.62485,20mi&callback=?
Broken down:
  • q – the search term (e.g. q=NATOChicago)
  • rpp – the number of tweets to return (25 in the URL above)
  • geocode – return tweets within a given radius of a given lat,long
  • callback – the JSONP callback parameter

Since 2009, tweets have been able to carry geolocation data.

When the URL is placed into a browser, this is the JSON that gets spit out:

According to this, the following are the key fields we need:

  • object -> results (array) -> from_user
  • object -> results (array) -> location
  • object -> results (array) -> geo
  • object -> results (array) -> profile_image_url
  • object -> results (array) -> text
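
For reference, here is a trimmed sketch of what one entry in data.results might look like (all values below are placeholders, not real output):

//One illustrative element of data.results (values are made up)
{
    "from_user": "some_user",
    "location": "Chicago, IL",
    "geo": { "type": "Point", "coordinates": [41.881256, -87.62485] },
    "profile_image_url": "http://example.com/avatar.png",
    "text": "Example tweet text"
}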

Displaying the Tweets
Now that we have the tweets, we need to map them.

//Function to get data from Twitter
ryan.getTwitter = function()
{
    removeLayer();
    bounds = new google.maps.LatLngBounds();
    //Search for up to 25 tweets within 20 miles of the current map center (JSONP request)
    $.getJSON('http://search.twitter.com/search.json?rpp=25&geocode=' + map.getCenter().lat() + ',' + map.getCenter().lng() + ',20mi&callback=?',
        function(data)
        {
            $.each(data.results, function(i, item) {
                if (item.geo == null)
                {
                    trace(i + ' no geo data');
                }
                else
                {
                    //Build the info window content: user name, profile picture and tweet text
                    var infoWindowContent = '<strong>' + item.from_user + '</strong><br>';
                    infoWindowContent += '<img src="' + item.profile_image_url + '"><br>';
                    infoWindowContent += item.text;
                    ryan.createTwitterMarker(i, item.geo.coordinates[0], item.geo.coordinates[1], infoWindowContent, item.profile_image_url);
                }
            });
        });
}

If the tweet is geotagged, an info window will display the user name, profile picture and the tweet text. However, most Twitter users have not opted into having their tweets geotagged. We will address that later.

Mapping the Tweets

//Function to create Twitter Marker
ryan.createTwitterMarker = function(i, latitude, longitude, infoWindowContent, icon)
{
    var markerLatLng = new google.maps.LatLng(latitude, longitude);

    //extend bounds for each Tweet and adjust the map to fit them
    bounds.extend(markerLatLng);
    map.fitBounds(bounds);

    //use the profile picture, scaled to 32x32, as the marker icon
    var image = new google.maps.MarkerImage(icon, null, null, null, new google.maps.Size(32, 32));

    twitter[i] = new google.maps.Marker({
        position: markerLatLng,
        map: map,
        title: infoWindowContent,
        icon: image
    });

    //add an onclick event that opens the info window on this marker
    google.maps.event.addListener(twitter[i], 'click', function() {
        infowindow.setContent(infoWindowContent);
        infowindow.open(map, twitter[i]);
    });
}
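
The functions above (and the geocoder below) rely on a few variables and helpers that are created elsewhere in the page: the map itself, the bounds, the marker array, the shared info window, the geocoder, removeLayer and trace. A minimal sketch of that supporting setup, assuming jQuery and the Maps API are already loaded, might look like this:

//Namespace object and shared state assumed by the snippets in this post
var ryan = {};
var map;                                          //the google.maps.Map instance created in initialize()
var bounds;                                       //LatLngBounds, rebuilt on every refresh
var twitter = [];                                 //one marker per tweet
var infowindow = new google.maps.InfoWindow();    //single shared info window
var geocoder = new google.maps.Geocoder();        //used for tweets without geo data

//Remove any markers left over from the previous search
function removeLayer() {
    for (var i = 0; i < twitter.length; i++) {
        if (twitter[i]) { twitter[i].setMap(null); }
    }
    twitter = [];
}

//Simple logging helper
function trace(message) {
    if (window.console) { console.log(message); }
}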

Creating a function to Geocode the results

The following revised version of getTwitter addresses the non-geotagged tweets by geocoding the location value, which is the city from which the tweet was sent.

//Function to get data from Twitter, geocoding tweets that have no geo data
ryan.getTwitter = function()
{
    removeLayer();
    bounds = new google.maps.LatLngBounds();
    $.getJSON('http://search.twitter.com/search.json?rpp=25&geocode=' + map.getCenter().lat() + ',' + map.getCenter().lng() + ',20mi&callback=?',
        function(data)
        {
            $.each(data.results, function(i, item) {
                //the info window content is the same in both cases
                var infoWindowContent = '<strong>' + item.from_user + '</strong><br>';
                infoWindowContent += '<img src="' + item.profile_image_url + '"><br>';
                infoWindowContent += item.text;

                if (item.geo == null)
                {
                    //no coordinates: geocode the free-text location instead
                    ryan.geocodeTwitter(i, item.location, infoWindowContent, item.profile_image_url);
                }
                else
                {
                    //coordinates available: place the marker directly
                    ryan.createTwitterMarker(i, item.geo.coordinates[0], item.geo.coordinates[1], infoWindowContent, item.profile_image_url);
                }
            });
        });
}

Now the geocoder function:

//Function to geocode a tweet's free-text location (e.g. "Chicago, IL")
ryan.geocodeTwitter = function(i, genLocation, infoWindowContent, icon)
{
    var geocoderRequest = { address: genLocation };
    geocoder.geocode(geocoderRequest, function(results, status) {
        if (status == google.maps.GeocoderStatus.OK)
        {
            //use the documented lat()/lng() accessors rather than internal property names
            ryan.createTwitterMarker(i, results[0].geometry.location.lat(), results[0].geometry.location.lng(), infoWindowContent, icon);
            trace(i + ' lat/long');
            trace(results[0].geometry.location);
        }
        else
        {
            trace("Geocode was not successful for the following reason: " + status);
        }
    });
}

The function above runs each tweet's location through Google's geocoder and then hands the result to the createTwitterMarker function to place a marker on the map. The problem is that, since most tweets carry only a city-level location rather than exact coordinates, many markers end up stacked on top of one another at the same geocoded point.
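
One possible workaround, not part of the original code, is to nudge markers that land on exactly the same geocoded point so they stay individually clickable. A small sketch of that idea, using a hypothetical jitterLatLng helper:

//Hypothetical helper: offset coordinates by a tiny random amount so that
//tweets geocoded to the same city-center point do not hide one another
function jitterLatLng(lat, lng) {
    var offset = 0.005;   //roughly a few hundred meters
    return new google.maps.LatLng(
        lat + (Math.random() - 0.5) * offset,
        lng + (Math.random() - 0.5) * offset
    );
}
//Usage inside createTwitterMarker: var markerLatLng = jitterLatLng(latitude, longitude);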

Conclusion
My precedent studies touted the value of geocoded social media in times of natural disasters and man-made crises such as wars. However, given the lack of participation by most users, the value of this proposal is severely limited, so tracking tweets may not be the best way to follow larger movements of people. One might instead track something that is more likely to have geocoordinates assigned automatically, such as photos; Flickr would be a natural next step.

BUS ROUTE 29 2000 TO 2010 COMMUNITY VITALITY

THE HUNCH:

The Chicago Housing Authority severely affected community vitality by demolishing Housing Projects between 2000 and 2010.

Community Vitality measured with:
CTA ridership
Per Capita Income
Population

It is difficult to draw a correlation between the CHA initiative of public housing destruction and any significant changes in neighborhood quality.

The values are too divergent to draw a conclusion. However, this was a great exercise for learning how to use an object-relational database management system, generate spatial queries, and effectively render geographic information, all with open-source software!

ARCH_497_S_MCCREE_S_2012

A STUDY IN CHICAGO RECYCLING SYSTEM

by Soledad Hernandez & Pinar Dursun

Problem

Millions of tons of waste are disposed of into our environment every year. As urban growth continues to take hold in many cities, the levels of all types of waste, combined with the problems created when disposing of them, are constantly increasing. Faced with this situation, an efficient waste management system can solve a basic problem in cities.

Chicago generates 7,299,174 tons of waste every year and residents recycle just more than 200,000 tons of materials per year.

Chicago has two recycling systems: Blue Cart and Drop-off

The goals

The aim of this project is to analyze how the recycling system in Chicago works, by:

  • Examining the effectiveness of the Chicago recycling system.
  • Asking how the recycling system of Chicago can be improved.
  • Mapping the distribution of recycling amounts by location.
  • Looking at the correlation between recycling amounts and demographic information.

Process

After preparing spreadsheets of the data and uploading them as tables to the iituim server database via PostgreSQL, the shapefiles were imported into QGIS, where the amounts were joined to their locations. From those joins, the equations for the analysis were created.

Maps

Conclusion

22 wards have neither a drop-off center nor the blue cart system; 1,186,364 people live in these wards without any recycling service.

It is obvious that the blue cart system is more efficient than drop-off centers because of its convenience: having to travel miles to throw recyclables into a drop-off center, instead of putting them into a blue cart in front of the house, discourages people who do not live in blue cart neighborhoods from recycling.

It also seems that people living in the northern neighborhoods are more eager to recycle, so the north side can be proposed as the location for future blue cart expansion.

For complete presentation with all the maps: RECYCLING 5.4.12

Fire Safety in High-Rise Buildings

Chicago is full of skyscrapers, and fire safety is one of the biggest issues in high-rise buildings. Fire poses a particularly serious threat in these buildings. First, it is difficult for fire fighters to reach the upper floors; for example, the highest fire truck ladder in Chicago only extends to the eighth floor. To extinguish blazes above that point, fire fighters must sometimes climb dozens of flights of stairs, dragging fire hoses and other heavy equipment with them.

Second, large building populations are difficult to evacuate rapidly and safely. Since elevators do not offer a safe means of exit during a fire, thousands of people may be forced to descend crowded stairs. The dangers are intensified by the noise, smoke, darkness, and confusion of a high-rise fire, particularly for those attempting to escape from an upper floor.

My idea behind this study is to create a building model based on CityGML and GIS that can be helpful to emergency crews in an emergency. Technologies such as GIS and CityGML can be used for analyzing and visualizing spatial data and 3D city models, which is helpful in planning, simulation, and training.

Data is available from many different sources, which poses challenges for extracting useful information:

  • Different sources of data: the city database, the owner, architectural firms, emergency communication services
  • Different types of data: tabular data, spatial data, 3D models, shapefiles, floor plans (or the Google Floor Plan project), etc.
  • Different formats of data: points, dates, coordinates, etc.
  • Various non-consistent schemas: for example, some data is available in CityGML and other data in GIS formats.

In order to maintain high integrity among the various data sources, we have to model our data based upon standards. After that we can import the data into a common data store. In the case of building data, the 3D building model can be modeled according to the IFC or CityGML standard, and 2D data can be modeled in GIS formats.
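
Purely as an illustration, the kind of unified record such a model might standardize could look like the sketch below; the field names are invented for this example and are not taken from the IFC or CityGML schemas.

//Hypothetical, simplified building record combining several data sources
var building = {
    id: "BLDG-001",                        //city database identifier (assumed)
    address: "Example address, Chicago",   //tabular data from the owner
    floors: 45,
    floorPlans: ["floor_01.dxf", "floor_02.dxf"],  //architectural firm drawings
    model3d: "building.gml",               //CityGML 3D model
    footprint: "building_footprint.shp",   //2D GIS shapefile
    sensors: [
        { id: "S-0501", floor: 5, type: "smoke" },
        { id: "S-0502", floor: 5, type: "heat" }
    ]
};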

After we model the data, we can use data mining techniques to extract useful information from it, and apply different AI techniques to find different uses for the data.

UML diagram-Building Model

Scenario -1

  • If a fire is detected on multiple floors of a high-rise building, AI can help fire fighters create smart evacuation plans on the fly using digitized floor plans and building sensors (see the sketch below).
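
A minimal sketch of that idea, assuming the digitized floor plan has already been reduced to a graph of rooms, corridors, and stairs, and that the building sensors tell us which nodes are currently on fire:

//Breadth-first search for the shortest escape route that avoids nodes
//reporting fire. graph maps a node id to an array of neighboring node ids.
function findEscapeRoute(graph, start, exits, onFire) {
    var queue = [[start]];
    var visited = {};
    visited[start] = true;
    while (queue.length > 0) {
        var path = queue.shift();
        var node = path[path.length - 1];
        if (exits.indexOf(node) !== -1) { return path; }   //reached an exit
        (graph[node] || []).forEach(function(next) {
            if (!visited[next] && !onFire[next]) {
                visited[next] = true;
                queue.push(path.concat(next));
            }
        });
    }
    return null;   //no safe route found
}

//Tiny made-up floor graph: fire in the east stairwell forces a west-side exit
var floorGraph = {
    "room5A": ["corridor5"],
    "corridor5": ["room5A", "stairsEast", "stairsWest"],
    "stairsEast": ["corridor5", "exitEast"],
    "stairsWest": ["corridor5", "exitWest"],
    "exitEast": ["stairsEast"],
    "exitWest": ["stairsWest"]
};
console.log(findEscapeRoute(floorGraph, "room5A", ["exitEast", "exitWest"], { "stairsEast": true }));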

Scenario- 2

Planning a rescue operation

  • For example, which window on the 5th floor is accessible by a fire ladder, or which buildings have a roof large enough for a helicopter landing.

Google's newly unveiled concept project, Google Glass, would be an ideal technology for fire fighters.

Google Glasses Preview

a. One benefit of using Google Glass is that fire fighters in an emergency do not want to look at yet another screen, whether it is a phone, a tablet, or anything else. Having a head-mounted display would help them focus on their jobs rather than on the technology. Moreover, it would also free their hands from holding a device so they could help people.

b. Google Glass uses natural language processing, which enables people to talk to the technology in human language rather than machine language. Hence this kind of technology would be much friendlier to use and easier to adopt.

Tablets and Augmented Reality Apps

a. Tablets with newer form factors and newer technologies could come in very handy for fire fighters working in the field. Having all the data they need on a tablet could help them make decisions quickly, which helps save lives.

Vision of a building model using augmented reality

In conclusion, with the use of urban information modeling and augmented reality we can not only reduce the response time from the fire station to the disaster site, but also provide an effective and safe escape from the building and support the rescue operation.

With the use of CityGML and augmented reality, simulations of events can also be used to train new personnel such as fire fighters, police officers, etc.

Fire Simulation for Training

Chicago Urban Migration - Wigdan Al-Guneid, Sakshi Aggarwal

OBJECTIVE

Obtain the relationship between the quality of a neighborhood and the migration of people to it. Through the analysis of attributes such as health care, education, transportation accessibility, and crime rate, we were able to indicate the popularity of a neighborhood and whether some attributes are more important than others in ranking the value of a property. Our analysis focuses on the relationship between housing values in six main neighborhoods in different parts of Chicago and the number of people who relocated from or to them.

The neighborhoods chosen are:

-China Town (District 31)

-Hyde Park (District 41)

-Lincoln Park (District 7)

-Logan Square (District 22)

-Garfield Ridge (District 56)

1- The data available to us should be translated into maps that illustrate the quality score for each of the chosen neighborhoods.

2- The analysis of the data should be derived from maps made using the CSV files and shapefiles downloaded from the City of Chicago website.

The software QGIS and pgAdmin will be used to derive maps and analysis of Chicago's neighborhoods.

In order to understand the pattern of migrations within the neighborhoods of Chicago, the study had to include a method of scoring the quality of a neighborhood. The quality score is derived from a series of calculations and equations that are later translated into queries in pgAdmin and then into maps.

EQUATIONS

School Equation:
School Score = (School safety score + Family involvement score + Parent engagement score + College eligibility + College enrollment) / 5

Hospital Equation:
Hospital Score = (Accessibility of hospitals + Number of hospitals) / 2

Accessibility Equation:
Accessibility Score = (School safety score + Family involvement score + Parent engagement score + College eligibility + College enrollment) / 5

Neighborhood Score Equation:
Quality of Neighborhood = (Home value index + School Score + Hospital Score + Accessibility Score) / 4
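
To make the arithmetic concrete, here is a small sketch of how the scores combine; the input numbers are placeholders, since the real values come from the database tables described below.

//Each component score is the average of its inputs, per the equations above
function average(values) {
    var sum = values.reduce(function(a, b) { return a + b; }, 0);
    return sum / values.length;
}

//Hypothetical inputs for one neighborhood (all numbers are made up)
var schoolScore        = average([7, 6, 8, 5, 6]);  //safety, family involvement, parent engagement, college eligibility, college enrollment
var hospitalScore      = average([8, 4]);           //accessibility of hospitals, number of hospitals
var accessibilityScore = average([6, 7, 5, 6, 7]);
var homeValueIndex     = 7;

//Quality of Neighborhood = (Home value index + School Score + Hospital Score + Accessibility Score) / 4
var neighborhoodScore = average([homeValueIndex, schoolScore, hospitalScore, accessibilityScore]);
console.log(neighborhoodScore);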

TABLES & MAPS

After checking the accuracy of the data in the tables created with pgAdmin, queries were made to connect the tables to QGIS. This step is important in order to match the data to the shapefile of the neighborhoods of the city of Chicago.

After making the maps, an analytic comparison was derived from the visual information in the maps and from the quality averages produced by the neighborhood score equations.

Summary:

The quality scores showed that the highest quality neighborhood was Lincoln Park, followed by Logan Square, Garfield Ridge, Hyde Park, China Town, and Avalon Park.

This ranking, however, matched the real estate values of Lincoln Park, Logan Square, and Avalon Park only. Hyde Park and China Town had real estate values that did not necessarily correlate with neighborhood quality.

This led us to the conclusion that there are other factors at work, such as the ethnic backgrounds of the populations living there: China Town is heavily populated by Asian communities, and Hyde Park is heavily populated by African Americans.

The study focused only on the amenities a neighborhood offers and whether or not they affect the decision to select that neighborhood.

Project Description: Chicago Crime

My final project is a highly in-depth analysis of crime in Chicago. The study focuses primarily upon the last eleven years of available crime data and emphasizes socioeconomic trends and correlations within the field of “Deceptive Practices.” The project is subdivided as follows:

Part I: “The Residential and Commercial Components of Deceptive Practices.”

Statement of Intent: After the fall of Enron and other multinational corporations in 2001, a major crackdown on corporate and commercial fraud was instituted under the Bush administration and the Securities and Exchange Commission [SEC]. Since many corporate and commercial entities reside in Chicago, the first stage of this project maps fraud and other crimes of deception within both the residential and commercial realm. It is hypothesized that since the 2001 crackdown, individuals may be more discouraged from committing fraud within a commercial setting, and, therefore, will be more likely to commit these acts at home. This initial section deconstructs all 150,000 fraudulent crimes and examines each in graph and map form on an annual basis.

Process: Data was imported from the database into Quantum GIS where it was converted into heat maps so as to show which wards engaged in the largest number of deceptive practices per year in both a commercial and residential setting. An additional heat map for each year was produced to show which wards had the greatest ratios of residential to commercial-based fraud by simply dividing the former total by the latter for each ward and re-mapping. After totals for each ward were uncovered using the “points in polygon” query in QGIS (an exhausting process considering the large amount of data), the totals were re-entered into a spreadsheet in order to graph the distributions. Lastly, three graphs showing the overall distribution of deceptive practices for all years were created in order to draw conclusions regarding the distribution.

Results: The final graphs are quite interesting. Though the initial study is somewhat skewed since data from the preceding decade was unavailable for comparison, the final graphs suggest a large decrease in residential and commercial-based fraud in wards 1-10 after 2001. A close look at the graphs also reveals a slight but rather steady rise in deceptive practices committed in a residential setting, as well as a sharper decrease in those committed in a commercial setting. Likewise, the ratio of residential to commercial-based fraud in almost every ward appears to have increased greatly since 2001, suggesting a reallocation in several wards of where these crimes are committed.

 

Part II: “The Sociocultural Correlations of Deceptive Practices.”

Statement of Intent: The original intent of the second portion of the project was to investigate various sociocultural components of Chicago and see if there was any correlation between where they occur and where deceptive practices occur. In terms of selecting these components, it was very important that they relate, in some way, to the pathology of why one commits crime. Therefore, this next series of maps investigates the link between a ward’s level of fraud and its median home-value, its adjacency to educational facilities, as well as its rate of population increase/decrease since 2001.

Process: Heat maps of all three components were generated and, once again, each ward's data was extracted with QGIS's spatial query functions into spreadsheet format for graphing. The graphs in these sections are especially important since they were used to determine the coefficient of correlation for each query. This was done by creating a graph of the D.P.'s occurring in each ward and superimposing the corresponding graph (whether it was schools per ward or home value per ward) on top of it. That gave me fifty x-points [the D.P.'s per ward] and fifty y-points [the compared value per ward] for each query, which were input into a correlation calculator to produce a separate coefficient of correlation for each relationship.
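
For reference, a minimal sketch of the coefficient-of-correlation step, assuming the fifty per-ward values had been exported from the spreadsheets into two plain arrays:

//Pearson correlation coefficient between two equal-length arrays,
//e.g. deceptive practices per ward (x) and schools per ward (y)
function pearson(x, y) {
    var n = x.length;
    var meanX = x.reduce(function(a, b) { return a + b; }, 0) / n;
    var meanY = y.reduce(function(a, b) { return a + b; }, 0) / n;
    var num = 0, denX = 0, denY = 0;
    for (var i = 0; i < n; i++) {
        var dx = x[i] - meanX;
        var dy = y[i] - meanY;
        num += dx * dy;
        denX += dx * dx;
        denY += dy * dy;
    }
    return num / Math.sqrt(denX * denY);
}

//Example with made-up numbers; the project used fifty ward values per series
console.log(pearson([12, 30, 7, 22], [5, 2, 9, 3]));   //negative value: more schools, fewer D.P.'s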

Results: While the previous section's results confirmed their initial hypothesis, the coefficients of correlation were quite low in this section. The highest positive correlation was just over .25, suggesting only a modest positive relationship. Despite this, the comparison between schools and D.P.'s per ward resulted in a rather strong negative correlation of -.456, suggesting that a lower number of schools in a ward corresponds with a higher number of deceptive crimes within the same boundary.

 

Part III: “The Urban & Exurban divide in Chicago Crimes”

Statement of Intent: Whereas the previous sections examined deceptive practices in close detail, this last area of crime analysis examined all Chicago-based crimes from 2001-2008 on a formal and graphical level. While, at first glance, the largest share of crime appears to occur in the downtown area, this ignores the effect that each ward's population has on the distribution. For example, while it may appear as though a large number of crimes occur in one ward, that may simply be because the population in that ward is much higher. I was, therefore, determined to examine whether most crimes actually occur in greater numbers in the downtown area or in the suburban/exurban regions of Cook County by re-contextualizing the total crimes per ward.

Process: In order to visualize the distribution of each crime, maps were generated in a very repetitive manner showing the point-based assortment of crime across the region in two-year intervals. Spatial queries in QGIS were subsequently executed to determine the number of crimes in each ward for every two-year interval. After that, the total number of urban/downtown-based crimes was divided by the total downtown population, and, conversely, the total number of exurban/suburban crimes was divided by the total exurban/suburban population. Though the process was extremely lengthy, these ratios indicate whether each crime occurred more in an urban or exurban setting relative to population density. The ratios were subsequently graphed in order to show the change over time, and a simple heat map for each crime was generated to show the wards in which the corresponding crime occurred the most throughout the eight-year interval.