
Crime Stats

15 Jan

In the open data world, crime statistics are an important data set. But I have had some recent thoughts about crime statistics that I would like to share.

Where do crime stats come from?

When I worked in that field, we had three systems we could pull from – CAD, ARS and RMS. The systems worked as follows:

  • CAD – These are the calls for service. They are the most current, but the crime type and other details are the least accurate.
  • ARS – These are reports that have been approved by an officer and a supervisor (a sergeant). They are probably one to two weeks out from the date of occurrence but are more accurate than CAD.
  • RMS – These are reports that have been approved by records employees and are the final, authoritative source of information. They are possibly a month old but are the most accurate.

As you can see, as time increases (we get further away from the date of occurrence), the accuracy of the data associated with the incident increases. Do you know which source the data set you are using comes from?

The City of Albuquerque publishes crime incidents. According to the metadata document:

This dataset contains the block location, case number description and date of calls for service received by APD that have been entered into the case management system and approved by a supervisor.

This leads me to believe that the data come from ARS, meaning they are a few weeks behind but fairly accurate. Looking at the data, the most recent incident is January 8th. Today is the 15th, so it's a week behind – about right for ARS. Having this metadata means I can be confident about what I am measuring. Good job, Albuquerque!

Which System Should we Query?

If you want to see what is happening now, you need to query CAD. But these data are only good for getting an idea about where incidents are occurring, not what types of incidents.

If you want to look at a month of crime, you should use ARS.

And for any longer historical crime analysis, RMS is the way to go.

But Wait, I have an Issue with ALL Crime Data

First, most crime data only lists the highest charge. One crime. If I shoot someone during a drug deal, this is a murder/homicide. No mention of distribution. If you are studying drug crimes, this will not show up. That is a problem.

We had a case in Albuquerque recently where an individual stole a vehicle that was running. They probably would charge him with unlawful taking of a motor vehicle (not auto theft). But there was a child inside the car. Now it’s a kidnapping. The report would pick the highest charge, kidnapping. But the real criminal intent was auto theft/unlawful taking. As much as I want to see that individual locked away for what he did, the crime statistic does not properly reflect the crime – it actually neglects the primary offense. And this can now be exacerbated by my next issue with crime data.

Lastly, here is a scenario: a person calls 911 because of a shooting, an officer is dispatched, an arrest is made, an attempted murder report is filed, and it makes it to RMS – the official record. Everything is good. But the case goes to trial – or it doesn't, because of a plea deal – and the individual pleads or is found guilty of aggravated assault. The court is the official record. A crime of attempted murder never occurred at that location on the date the report states; an aggravated assault did. What if the person was found not guilty? A murder never occurred. But the police say it did? Is that libel?

I know this may seem nitpicky, but given the unbelievable number of plea deals and reduced charges, how accurate are our police reports? Probably more accurate than the final case disposition in describing what happened, but the disposition is the final, official truth. If we don't use the final disposition, crime stats are not crimes – they are charges.

I think this is a new area for crime research. A department reports to the FBI UCR based on RMS, but those charges may not be what the courts decided. I would love to see the difference between the charges in RMS and the final dispositions. Maps comparing reported crimes with final dispositions should show much lower levels of crime and far fewer felonies.

Just something to think about.

Creating a Tile Server in Go and Consuming in Leaflet

16 Nov

I have used TileMill to create tiles and use them as a custom basemap in the Fulcrum app, and I loved it. I wanted to be able to do the same in my web maps with Leaflet. After a short search, I found the answer.

What I Wanted

There are ready-made tile servers, such as TileStache, but I wanted to run a small, simple server on Windows, and I couldn't modify IIS or install Apache. How could I do this simply, without infrastructure?

What I Knew and Didn’t

I know a tile server splits data up into images and serves them based on zoom level, row and column.

I know TileMill is awesome and will export MBTiles for me.

I had no idea what an MBTiles file looks like inside.

GitHub to the Rescue

I found a PHP tile server by Bryan McBride on GitHub. It was the perfect example. From the code, it was clear that MBTiles are just SQLite databases. What?!?! That is brilliant, and it makes using them simple. You just need to query by zoom level, row and column – as Bryan did in his code.
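If you want to see this for yourself before writing any server code, you can open an .mbtiles file with any SQLite client. Here is a rough sketch from Node.js – the better-sqlite3 package, the file name and the z/x/y values are my assumptions, not part of Bryan's project:

const Database = require("better-sqlite3"); // npm install better-sqlite3 (assumed)

// Open the TileMill export read-only; the file name is hypothetical.
const db = new Database("./tiles.mbtiles", { readonly: true });

// List what is inside. Depending on the export, tiles may be a real table or a view.
console.log(db.prepare("SELECT name, type FROM sqlite_master WHERE type IN ('table','view')").all());

// Pull a single tile the same way the PHP server does: by zoom level, column and row.
const tile = db
  .prepare("SELECT tile_data FROM tiles WHERE zoom_level = ? AND tile_column = ? AND tile_row = ? LIMIT 1")
  .get(9, 100, 200); // made-up z/x/y values

if (tile) {
  console.log("Got " + tile.tile_data.length + " bytes of PNG data");
}

Each row's tile_data column is just the raw PNG bytes, which is why a tile server is little more than a SELECT statement and a write.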

I installed WAMP because it is the easiest way to get Apache and PHP on my machine, and I used it to test the PHP server with my own tiles. It worked with no problems, so I knew I could generate tiles and serve them. Now I needed a solution that did not require Apache, PHP, or changes to IIS.

My Solution

I chose to work with Go. It would create a simple .exe I could run on any machine.

I needed a SQLite library, and I chose go-sqlite3. I grabbed my go-to web server code and started trying to connect to the MBTiles file I had created. From Bryan's code, I knew there was a table called tiles with columns for zoom_level, tile_column, tile_row and tile_data. Is that all? Are there other tables? And what are the data types in these columns? Go will make me specify them (there are ways around this).

I googled the MBTiles spec and there it was, on GitHub, posted by Mapbox. Now I knew there was a metadata table as well, along with all the columns in each table and their types. I started by querying the metadata table just to verify my database connection worked.

Connection to metadata table

Once I got a response, I went to work on the tiles table. I needed to connect, query and return a PNG with the data.


func Tiles(w http.ResponseWriter, r *http.Request) {
    w.Header().Set("Access-Control-Allow-Origin", "*")
    w.Header().Set("Content-Type", "image/png")

    // Pull the z, x and y variables out of the gorilla/mux route.
    vars := mux.Vars(r)
    z := vars["z"]
    x := vars["x"]
    y := vars["y"]

    // An MBTiles file is just a SQLite database.
    db, err := sql.Open("sqlite3", "./tiles.mbtiles")
    if err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }
    defer db.Close()

    rows, err := db.Query("SELECT * FROM tiles WHERE zoom_level = ? AND tile_column = ? AND tile_row = ?", z, x, y)
    if err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }
    defer rows.Close()

    for rows.Next() {
        var zoomLevel, tileColumn, tileRow int32
        var tileData []byte // tile_data holds the raw PNG bytes
        rows.Scan(&zoomLevel, &tileColumn, &tileRow, &tileData)
        w.Write(tileData)
    }
}


The above code is the handler function for the route:

Route{"Tiles", "GET", "/{db}/{z}/{x}/{y}", Tiles}

I set two headers: one that allows cross-origin requests and another that specifies I am returning an image.

I then grab the variables from the route. I only grab z, x and y. I have {db} in the route, but I am hard-coding it in the map URL for now. In the future, by passing it, I can use one route to grab different tile sets.

The query passes parameters using ? placeholders, with a variable supplied for each ?.

Lastly, I loop, scan and write out the results. The great part is that tile_data is read as bytes and w.Write wants bytes – no type conversion needed.

I now have a tile server. The complete code is on GitHub.

Connecting to server and getting back an image (tile)

Connect to the Server from Leaflet

Connecting from Leaflet is as easy as creating your standard map and then adding an L.tileLayer:

var mbTiles = L.tileLayer('http://localhost:8080/tiles/{z}/{x}/{y}', {
  tms: true,
  opacity: 0.7
}).addTo(map);

The URL to the server follows our route, /{db}/{z}/{x}/{y}. I hard-coded the {db}, so you will see the URL already contains tiles and starts at {z}.

You can now watch the network traffic in your browser's developer tools and see the requests for tiles at the different zoom levels. Using tiles loads my data seconds faster than when I bring it in as layers.

Download the code, run go build, then drop the server in a folder with an MBTiles file named tiles.mbtiles and you are ready to go.

If you want a PHP Tile server option, Bryan pointed me to this one.

QGIS and MongoDB

20 Feb

There was an excellent plugin for QGIS that used MongoDB. I found a copy on my old computer and put it on GitHub.

Albuquerque Elevation using Turf.js

19 Feb

The City of Albuquerque publishes contour data at two-foot intervals. This is really cool, but unfortunately, to load it in a map you would need to manually page the results (paging will be introduced in ArcServer 10.3), because the service only returns 1,000 features at a time and there are 706,840 features – almost three-quarters of a million, which would take more than 700 requests to pull down. Even if you could display that many features, your map would probably get pretty slow. So how can we use this data? In this post, I will show you how to create an elevation service that gives you an approximate elevation at any point in the city.

The Application

The application will query the City of Albuquerque's contour service to get an answer. The contour service contains lines, and we will query it with a point (the user's click). That means our point would have to land exactly on a line – which would take a real stroke of luck. To work around this intersection problem, we will use Turf.js to buffer the point and pass that geometry to the service. Now we have the intersection of a polygon with a line – a much easier query. The finished application is shown below.

Elevation at a point in Albuquerque

The Logic

We need to allow the user to click on the map and create a buffer. We can do this by combining Leaflet.js with Turf.js.

map.on("click", function(e) {
  var a = L.marker(e.latlng);
  var b = a.toGeoJSON();
  // Buffer the clicked point by 0.01 miles so it can intersect the contour lines.
  var buffered = turf.buffer(b, 0.01, "miles");
  var result = turf.featurecollection(buffered.features.concat(b));
  // Convert the buffer's coordinates into the ESRI JSON "rings" format.
  var g = '{"rings":' + JSON.stringify(buffered.features[0].geometry.coordinates) + '}';

Now we have a buffer we can pass to the ESRI REST API in our variable g. We make the call to the endpoint using our standard AJAX call.

  var url = "http://coagisweb.cabq.gov/arcgis/rest/services/public/contours/MapServer/0/query";
  var params = "f=json&outSR=4326&outFields=*&geometryType=esriGeometryPolygon&spatialRel=esriSpatialRelIntersects&inSR=4326&geometry=" + g;
  var http = new XMLHttpRequest();
  http.open("POST", url, true);
  http.setRequestHeader("Content-type", "application/x-www-form-urlencoded");
  http.onreadystatechange = function() { // Call a function when the state changes.
    if (http.readyState == 4 && http.status == 200) {
      var d = JSON.parse(http.responseText);
      a.bindPopup("<h3>" + d.features[0].attributes.ELEV + "</h3>").addTo(map);
      a.openPopup();
    }
  };
  http.send(params);
}); // close the map click handler

We add a marker to the map at the location of the user's click and then bind a popup to it showing the elevation.

Revit on the Web: Using MVC and Database Views

4 Feb

In my previous post, I showed how you could use MVC and the Entity Framework to create a webpage that displays and edits a Revit model without typing any code. In that post, you connected your page to a table in the Revit database. But what if you do not want the whole table to be editable or returned? In this post, I will show you another way to display your Revit data.

The Use Case

Using tables is the easiest way to connect to the information you need in Revit, but not everything in a table may be needed. During my days as a facility planner, I needed clients to edit room information. If I connected to the rooms table, there would be way too many rooms, most of them irrelevant. I was only interested in classrooms, so why should I display the hallways, electrical closets or bathrooms on my page? Also, I do not want the client to accidentally modify one of those rooms. I only want them to see the relevant data. Nothing more. Nothing less.

The Database

While you could probably handle the filtering and removal of rooms in the MVC application's model, it is much easier to do so in the actual database. The database allows you to create views. A view can be a query against a single table or several tables joined into one. To test out a view in my MVC application, I created a view of rooms where the area is greater than 10.

Select * from rooms where area >10. Returns 7 rooms.

Results

Now you can create your MVC application as you did in the previous post, but this time you select the view instead of a table. The view is live: if a room is added to the model and its area is greater than 10, it will show up in the view. I pulled the database back into Revit, and it is clear that Revit does not care whether the database has views – it only pulls back the tables, and only the table or tables you specify. The image below shows the rooms page of my application with only 7 records.

Rooms page based on view

A Street Bump Knockoff: Mobile Phone Acceleration

21 Jan

Some time ago, I read about an app that uses your phone to detect speed bumps. It sounded cool, but I didn't give it much thought. Recently, I was asked a question about the app and took a harder look at it. The app is called Street Bump and is produced by the City of Boston. It won a Best of the Web Government Achievement award in 2013 in the Government to Citizen category. The first thing that came to mind when reading about the app was: could I build it? This article will walk you through how I built a very simple version of the Street Bump app.

My best guess was that the app used the z coordinate of the mobile device. I started reading about the HTML5 APIs that would be needed and found that there is a DeviceMotion event with an acceleration attribute. This is what I will use. Think about it: you are driving down a road and the phone moves up and down. Hitting a pothole, your phone should change elevation quickly, but the elevation change itself will be minimal. Acceleration should be the right choice.

The Final App. A Chart Streaming Data and a Map With Pothole Locations

The first thing I wanted to do was read out the acceleration values.

window.addEventListener("devicemotion", function(e) {
  console.log(e.acceleration.z);
}, true);

Now I can see a bunch of numbers in the console. I want the mobile phone user to be able to see them too – there is no console on your phone. I decided to use Smoothie Charts to stream the data to the webpage.

The second part of the app is that the coordinates are sent to a server. To accomplish this, I used Leaflet.js map.locate(). It is just a wrapper around the HTML5 Geolocation API, but it lets me draw a map at the same time using a single API.

The final code:

  1. Creates a map and a line for the chart.
  2. Draws a blank map, sets up the location events and functions, and initializes the chart.
  3. Handles the action in the devicemotion event listener. When the event fires, the code adds a time stamp and the z value to the chart and calls map.locate() to get the coordinates and draw a point.

Here is the full JavaScript for the app.

var raw = [];
var map = L.map('map');
var line1 = new TimeSeries();

window.addEventListener("devicemotion", function(e) {
  line1.append(new Date().getTime(), e.acceleration.z);
  console.log(e.acceleration.z);
  raw.push(e.acceleration.z);
  document.getElementById("rawdata").innerHTML = raw.join(",");
  // Only look up the location when the vertical acceleration spikes.
  if (e.acceleration.z > 3 || e.acceleration.z < -3) {
    map.locate({setView: true, maxZoom: 16});
  }
}, true);

var smoothie = new SmoothieChart({
  millisPerPixel: 70,
  maxValueScale: 0.82,
  grid: {fillStyle: 'rgba(192,192,192,0.15)', verticalSections: 5},
  labels: {disabled: true, fillStyle: '#000000', fontSize: 12, precision: 0},
  timestampFormatter: SmoothieChart.timeFormatter,
  maxValue: 10,
  minValue: -10
});
smoothie.addTimeSeries(line1, {lineWidth: 5, strokeStyle: '#000000'});
smoothie.streamTo(document.getElementById("mycanvas"), 1000);

function onLocationFound(e) {
  L.tileLayer('http://{s}.tile.osm.org/{z}/{x}/{y}.png').addTo(map);
  L.marker(e.latlng).addTo(map);
}

map.on('locationfound', onLocationFound);

You can play with the values in the if statement to make the app more or less sensitive. The Street Bump algorithm is far more advanced than our simple greater-than-3 or less-than-minus-3 check. Also, the Street Bump app sends the data to a server; if three people mark a pothole at the same location, the city will respond. Sending the data to a server would require an AJAX call or, better yet, WebSockets with a Node.js server.
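As a rough sketch of that send step – the /bumps endpoint and the payload shape are made up here, since I have not seen the real Street Bump backend – each strong reading could be POSTed with the same XMLHttpRequest pattern used earlier:

function reportBump(lat, lng, z) {
  // POST one reading to a hypothetical collection endpoint.
  var http = new XMLHttpRequest();
  http.open("POST", "/bumps", true);
  http.setRequestHeader("Content-type", "application/json");
  http.send(JSON.stringify({lat: lat, lng: lng, z: z, time: Date.now()}));
}

// A variant of the onLocationFound function above: report the most
// recent z value (the last entry in the raw array) along with the fix.
function onLocationFound(e) {
  L.marker(e.latlng).addTo(map);
  reportBump(e.latlng.lat, e.latlng.lng, raw[raw.length - 1]);
}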

Ideally, the app is just a data collector, sending a stream of acceleration values, each with a (lat, long), to a database. The second part of the app is a program that analyzes the data, looking for multiple readings that together signal a pothole. The more data collected, the smarter the app should become. For example, if a single road segment is recorded multiple times, the app should be able to average the values to account for different vehicle speeds, suspensions, etc. This can be fed back to the algorithm to make it smarter.
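To make that analysis step concrete, here is a rough sketch of what the server-side program might do. The grid size (rounding to four decimal places, roughly a ten-meter cell) and the three-reading threshold are my own guesses, not Boston's algorithm:

// readings is an array of {lat, lng, z} objects collected from phones.
function findPotholes(readings) {
  var cells = {};
  // Group readings into coarse grid cells by rounding the coordinates.
  readings.forEach(function(r) {
    var key = r.lat.toFixed(4) + "," + r.lng.toFixed(4);
    (cells[key] = cells[key] || []).push(Math.abs(r.z));
  });
  var potholes = [];
  for (var key in cells) {
    var zs = cells[key];
    var avg = zs.reduce(function(a, b) { return a + b; }, 0) / zs.length;
    // Flag a cell only when several independent readings agree it was a hard hit.
    if (zs.length >= 3 && avg > 3) {
      potholes.push({cell: key, hits: zs.length, averageZ: avg});
    }
  }
  return potholes;
}

Averaging within a cell is one simple way to smooth out the differences in vehicle speed and suspension mentioned above.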

Geoprocessing on the Client Side with Turf.js

7 Jan

Geoprocessing has primarily been a desktop activity. Using ArcServer, you can publish geoprocessing services, which takes geoprocessing off the desktop but requires communication between a client and a server. I don't mind you downloading my data and processing it on your desktop, but I really don't like the idea of you using my CPU and memory to run some harebrained geoprocessing task on my server. Given the advances in web technology, especially JavaScript, can't we come up with something better? Can't we let the client handle the work?

We can with Turf.js.

Using Turf.js, you can perform a large number of commonly used geoprocessing functions client side. In this post, I will show you how to buffer, run a point-in-polygon test, and sum a field for points in a polygon.

Buffer a Point

1. Using Leaflet.js, create a map and add a tile layer:

  • var map = L.map('map', {center: [35.10418, -106.62987], zoom: 9});
  • L.tileLayer('http://{s}.tile.osm.org/{z}/{x}/{y}.png').addTo(map);

2. Create two points using turf.point, passing longitude and latitude.

  • var pointOne = turf.point(-106.32568, 35.11542);
  • var pointTwo = turf.point(-106.33, 35.22);

3. The points are now GeoJSON. To add them to the Leaflet.js map, use L.geoJson.

  • L.geoJson(pointOne).addTo(map);
  • L.geoJson(pointTwo).addTo(map);

4. Buffer a point and assign the result to a variable, then add the buffer to the map. The buffer function takes a feature (point, line, polygon, or feature collection), a distance, and the units (miles, kilometers or degrees).

  • var b = turf.buffer(pointOne, 2, "miles");
  • L.geoJson(b).addTo(map);

Now you should have a map that looks like the one below.

Two points with one buffered.

Point in Polygon

Now that we have two points and a buffer, let’s perform a point in polygon.

1. Create a polygon from the buffer.

  • var polygon = turf.polygon(b.features[0].geometry.coordinates, {
    "fill": "#6BC65F",
    "stroke": "#6BC65F",
    "stroke-width": 5,
    "title": "Polygon",
    "description": "A sample polygon"
    });

2. To run the point-in-polygon test, use turf.inside(), passing the point and the polygon as parameters. The result will be true or false.

  • alert("pointTwo is inside? " + turf.inside(pointTwo, polygon));

Now you will be alerted that the point is not inside the polygon.

Point not in Polygon

In the previous example, the features did not have any attributes. In the next geoprocessing example, we will calculate a value from points in a polygon.

Using Statistics: Sum

1. This example starts with a Leaflet.js map.

  • var map = L.map('map', {center: [35.10418, -106.62987], zoom: 9});
  • L.tileLayer('http://{s}.tile.osm.org/{z}/{x}/{y}.png').addTo(map);

2. Add a function for iterating through features so we can add a popup.

  • function onEachFeature(feature, layer) {
    layer.bindPopup("<h3>Add this number: " + feature.properties.title + "</h3>" + feature.properties.description);
    }

3. Now add your points, but this time we will add properties to the points.

  • var p1 = turf.point(-106, 35, {"marker-color": "#6BC65F", "title": 100, "description": "Not in Polygon", "someOtherProperty": "I am another property"});
  • var p2 = turf.point(-106.62987, 35.10418, {"marker-color": "#6BC65F", "title": 4, "description": "In Polygon", "someOtherProperty": "I am another property"});
  • var p3 = turf.point(-106.64429, 35.14125, {"marker-color": "#6BC65F", "title": 1, "description": "Also in Polygon", "someOtherProperty": "I am another property"});

4. To sum a field, you will need at least one polygon – you can use multiple polygons as well.

  • var polygon = turf.polygon([[
    [-106.73355, 35.21197], [-106.73355, 35.04911], [-106.51932, 35.04911], [-106.49872, 35.19177]
    ]], {
    "fill": "#6BC65F",
    "stroke": "#6BC65F",
    "stroke-width": 5,
    "title": "Polygon",
    "description": "A sample polygon"
    });

5. Create feature collections for the polygon(s) and points. Add them to the map using an option to call your onEachFeature function.

  • var p = turf.featurecollection([polygon]);
    var t = turf.featurecollection([p1,p2,p3]);
  • L.geoJson(p).addTo(map);
  • L.geoJson(t, {
    onEachFeature: onEachFeature
    }).addTo(map);

6. Now pass the sum function the polygon, points, the field to sum and the name of the output field.

  • var sum = turf.sum(p, t, "title", "output");

7. When you click the map, you will get the result. Notice the marker with the value of 100 is ignored since it is outside the polygon.

  • map.on("click", function() {alert(sum.features[0].properties.output);});

Lastly, you can click a marker and its popup information is displayed.


Running geoprocessing tasks without having to pass data back and forth between client and server is the way to go. It also means your browser can now work as a simple desktop GIS application.