Crime Stats

15 Jan

In the open data world, crime statistics are an important data set. But I have had some recent thoughts about crime statistics that I would like to share.

Where do crime stats come from?

When I worked in that field, we had three systems from which we could pull – CAD, ARS and RMS. The systems worked as follows:

  • CAD – These are the calls for service. They are the most current, but the crime type and other information are the least accurate.
  • ARS – These are reports that have been approved by an officer and a supervisor (sergeant). They are probably one to two weeks out from the date of occurrence but are more accurate than CAD.
  • RMS – These are reports that have been approved by records employees and are the final, authoritative source of information. These are possibly a month old but are the most accurate.

As you can see, as time increases (we get further from the date of occurrence), the accuracy of the data associated with the incident increases. Do you know which source the data set you are using comes from?

The City of Albuquerque publishes crime incidents. According to the metadata document:

This dataset contains the block location, case number description and date of calls for service received by APD that have been entered into the case management system and approved by a supervisor.

This leads me to believe that the data comes from ARS, meaning it is a few weeks behind but fairly accurate. Looking at the data, the most recent incident is January 8th. Today is the 15th, so it's a week behind – about right for ARS. Having this metadata means I can be confident in what I am measuring. Good job, Albuquerque!

Which System Should we Query?

If you want to see what is happening now, you need to query CAD. But these data are only good for getting an idea about where incidents are occurring, not what types of incidents.

If you want to look at a month of crime, you should use ARS.

And for any longer historical crime analysis, RMS is the way to go.

But Wait, I have an Issue with ALL Crime Data

First, most crime data only lists the highest charge. One crime. If I shoot someone during a drug deal, it is recorded as a murder/homicide. No mention of drug distribution. If you are studying drug crimes, this incident will not show up. That is a problem.

We had a case in Albuquerque recently where an individual stole a vehicle that was left running. They would probably charge him with unlawful taking of a motor vehicle (not auto theft). But there was a child inside the car; now it's a kidnapping. The report would pick the highest charge, kidnapping, but the real criminal intent was auto theft/unlawful taking. As much as I want to see that individual locked away for what he did, the crime statistic does not properly reflect the crime – it actually neglects the primary offense. And this can be exacerbated by my next issue with crime data.

Lastly, here is a scenario: a person calls 911 because of a shooting, an officer is dispatched, an arrest is made, an attempted murder report is filed, and it makes it to RMS – the official record. Everything is good. But then the case goes to trial – or it doesn't, because of a plea deal – and the individual pleads to, or is found guilty of, aggravated assault. The court record is the official record. A crime of attempted murder never occurred at the location and on the date the report states; an aggravated assault did. And what if the person was found not guilty? Then an attempted murder never occurred at all. But the police say it did? Is that libel?

I know this may seem nitpicky, but given the unbelievable number of plea deals and reduced charges, how accurate are our police reports? Probably more accurate than the final case dispositions at describing what happened, but the disposition is the final truth. Crime stats are not crimes; they are charges, unless we use the final disposition.

I think this is a new area for crime research. A department reports to the FBI UCR program based on RMS, but those charges may not be what the courts actually decided. I would love to see the difference between the charges in RMS and the final dispositions. Maps comparing crimes with final dispositions would likely show much lower levels of crime and far fewer felonies.

Just something to think about.


Creating a Tile Server in Go and Consuming in Leaflet

16 Nov

I have used TileMill to create tiles and use them as a custom basemap in the Fulcrum app, and I loved it. I wanted to be able to do the same in my web maps with Leaflet. After a short search, I found the answer.

What I Wanted

There are tile servers already made for you, such as TileStache, but I wanted to run a small, simple server on Windows, and I couldn't modify IIS or install Apache. How could I do this simply, without infrastructure?

What I Knew and Didn’t

I knew a tile server splits data up into images and serves them based on zoom level, row and column.

I knew TileMill is awesome and will export MBTiles for me.

I had no idea what an MBTiles file looks like inside.

GitHub to the Rescue

I found a PHP tile server by Bryan McBride on GitHub. It was the perfect example. From the code, it was clear that MBTiles are just SQLite DBs. What?!?! That is brilliant, and it makes using them simple. You just need to query for zoom level, row and column – as Bryan did in his code.

I installed WAMP because it is the easiest way to get Apache and PHP on my machine to test the PHP server with my own tiles. It worked with no problems, so I knew I could generate tiles and serve them. Now I needed to create a solution that did not require Apache, PHP, or changes to IIS.

My Solution

I chose to work with Go. It would create a simple .exe I could run on any machine.

I needed a SQLite library, and I chose go-sqlite3. I grabbed my go-to web server code and started trying to connect to the MBTiles file I created. From Bryan's code, I knew there was a table called tiles with columns for zoom_level, tile_column, tile_row and tile_data. Is that all? Are there other tables? And what are the data types in these columns? Go will make me specify them (there are ways around this).

I googled the MBTiles spec and there it was, on GitHub, posted by Mapbox. Now I knew there was a metadata table as well, plus all the columns in each table and their types. I started by querying the metadata table just to verify that my database connection worked.

Connection to metadata table
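
One of the nice things about MBTiles being SQLite is that this sanity check takes only a few lines of Go. Here is a minimal sketch, assuming the go-sqlite3 driver and an MBTiles file named tiles.mbtiles in the working directory (not the exact code from the project):

package main

import (
	"database/sql"
	"fmt"

	_ "github.com/mattn/go-sqlite3" // registers the sqlite3 driver
)

func main() {
	db, err := sql.Open("sqlite3", "./tiles.mbtiles")
	if err != nil {
		panic(err)
	}
	defer db.Close()

	// Per the MBTiles spec, metadata is just name/value pairs.
	rows, err := db.Query("SELECT name, value FROM metadata")
	if err != nil {
		panic(err)
	}
	defer rows.Close()

	for rows.Next() {
		var name, value string
		rows.Scan(&name, &value)
		fmt.Println(name, "=", value)
	}
}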

Once I got a response, I went to work on the tiles table. I needed to connect, query and return a PNG built from the data.


func Tiles(w http.ResponseWriter, r *http.Request) {
	// Allow cross-origin requests and declare that we return an image.
	w.Header().Set("Access-Control-Allow-Origin", "*")
	w.Header().Set("Content-Type", "image/png")

	// Grab z, x and y from the route.
	vars := mux.Vars(r)
	z := vars["z"]
	x := vars["x"]
	y := vars["y"]

	db, _ := sql.Open("sqlite3", "./tiles.mbtiles")
	rows, _ := db.Query("SELECT * FROM tiles WHERE zoom_level = ? AND tile_column = ? AND tile_row = ?", z, x, y)

	for rows.Next() {
		var zoom_level int32
		var tile_column int32
		var tile_row int32
		var tile_data []byte // the PNG blob
		rows.Scan(&zoom_level, &tile_column, &tile_row, &tile_data)

		// tile_data is already []byte, which is what w.Write wants.
		w.Write(tile_data)
	}
	db.Close()
}


The above code is the handler function for the route:

Route{"Tiles", "GET", "/{db}/{z}/{x}/{y}", Tiles}

I set two headers: one that allows cross-origin requests and another that specifies I am returning an image.

I then grab the variables from the route. I only grab z, x and y. I have {db} in the route, but I am hard coding the database name in the handler for now. In the future, by passing it, I can use one route to serve tiles from different databases.

The query passes parameters using ?, with a variable supplied for each ?.

Lastly, I loop, scan and write out the results. The great part is that I read bytes from tile_data, and w.Write wants bytes. No type conversion needed.
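
For context, here is a minimal sketch of how the route and handler get wired together, assuming the gorilla/mux router (the handler already uses mux.Vars, and the Route struct comes from my go-to web server code):

package main

import (
	"net/http"

	"github.com/gorilla/mux"
)

func main() {
	r := mux.NewRouter()
	r.HandleFunc("/{db}/{z}/{x}/{y}", Tiles).Methods("GET")
	http.ListenAndServe(":8080", r)
}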

I now have a tile server. The complete code is on Github.

Connecting to server and getting back an image (tile)

Connect to the Server from Leaflet

Connecting from Leaflet is as easy as creating your standard map and then adding an L.tileLayer:

var mbTiles = new L.tileLayer('http://localhost:8080/tiles/{z}/{x}/{y}', {
	tms: true,
	opacity: 0.7
}).addTo(map);

The URL to the server follows our route: /{db}/{z}/{x}/{y}. I hard coded the {db}, so you will see the URL already has tiles and starts at {z}.

You can now watch the network traffic in the developer tools of your browser and see the requests for tiles at the different zoom levels. Using tiles loads my data seconds faster than when I bring it in as layers.

Download the code, run go build, then drop the server in a folder with an MBTiles file named tiles.mbtiles, and you are ready to go.

If you want a PHP Tile server option, Bryan pointed me to this one.

Graph Databases II: Routing

18 Aug

I wrote a previous post on graph databases that did little more than add some nodes and a relationship. The post required me to install Neo4j and the Python py2neo library. It was simple, but it was a start. I have moved on to other things since then, most notably Go. But I have found myself coming back in a roundabout way. I recently wrote a plugin for Open Trail Data in Leaflet. Trails are a network, but the data is really just lines. I thought that if we added nodes at the starting points, intersections and ends of the trails, we could use them for routing. I set off to work.

The Data

The first step was grabbing a simple network. I used the Open Trail Data dummy data. I was going to draw nodes and then import them into a graph database, but I decided to shelve that for later – for when I knew I could route. The data is shown in the image below.

Open Trail Data

The Graph

Next, I needed to create the graph database. I started with all the endpoints of the lines and then added nodes at the intersections. The code below shows how I created two nodes and a path between them.

one = Node("trail", id="1", junction="1")
oneint = Node("trail", id="1", junction="1,2,3,4")
p = Path(one, Rel("connects"), oneint)

The end result was a graph database as seen in the image below.

Finished Graph Database

Route

Now it was time to route. I do not have weights or costs on the edges; I just wanted to see if I could get from one point to another. I chose node 3 to node 6. They are both on the left. I like this pair because there are two possible ways to go. To route, I used the following query:

results = graph.cypher.execute("START beginning=node(22), end=node(32) MATCH p = shortestPath(beginning-[*..5000]-end) RETURN p")

The nodes are 22 and 32 because that is the internal ID assigned by the database. My result is a path:

(:trail {id:"3",junction:"3"})-[:connects]->(:trail {id:"1",junction:"1,2,3,4"})-[:connects]->(:trail {id:"1",junction:"1,7"})-
[:connects]->(:trail {id:"7",junction:"7,6,8,9,12"})-[:connects]->(:trail {id:"6",junction:"6"})

This tells me the database is going from node 3, to the intersection of 1,2,3,4, to the intersection of 1,7, to the intersection of 7,6,8,9,12, and then to the end node 6.

You can see the nodes by using:

resultPath = results[0][0]
solved = resultPath.nodes
for n in solved:
   print n

Now I know all the nodes required to get from one point to another. If I added (lat,long) values to each node, I could have drawn the path on a map.

I will put this in my pocket for later. I am sure it will come up again, and I will be better suited to solve a problem then. In the meantime, I think I will work on figuring out how to get a standard data format into the database – shapefile or GeoJSON?

Bypass Cognos Forms

29 Jul

The City of Albuquerque has a Cognos report that allows you to see employee earnings. You can view the report on their transparency website, or you can download a copy from the open data site. I do not want to go through a Cognos front page every time I need the data, and I do not want to check when they last exported the XLS or XML version to the open data site. I want to grab a fresh copy – I really want a service. Since they do not have one, we are stuck using Cognos every time we want the live data. Luckily, we can script it.

Cognos has several JavaScript functions: cognosLaunch(), cognosLaunchWithTarget() and cognosLaunchInWindow(). These functions take different parameters and then call cognosLaunchArgArray(). Where do you get the JavaScript library? From the City of Albuquerque – or anyone else who has Cognos installed. The file is located at:

http://cognospublic.cabq.gov/cabqcognos/cognoslaunch.js

You can link to this file in your HTML:

<script src="http://cognospublic.cabq.gov/cabqcognos/cognoslaunch.js"></script>

Now you just need to know how to format the inputs properly. You can find all the information you need by running the report on the transparency site first. When the report finishes, view the source. You will see all the variables, highlighted in the image below:

Cognos Report Source reveals all the parameters.

Now, format the function, passing the correct data. For cognosLaunch(), you will have the function below:

cognosLaunch("ui.gateway", "http://cognospublic.cabq.gov/cabqcognos/cgi-bin/cognos.cgi",
	"ui.tool", "CognosViewer",
	"ui.object", "/content/folder[@name='Transparency']/report[@name='Transparency Report – Graded employees']",
	"ui.action", "run",
	"run.prompt", "false",
	"ui", "h1h2h3h4",
	"run.outputFormat", "CSV");

Put this in the <script> section of an HTML file, and it will launch a window and download the CSV automatically. I have put a file on GitHub. There is another example that includes HTML and a JS file; the CABQ.js file formats the function for you. In this file, you could pass optional search parameters. I will leave that part up to you – I like grabbing all the data.

You can pass different output formats as well – CSV, HTML, PDF, singleXLS, XHTML, XLWA, and XML. Lastly, the City does not allow AJAX calls from other domains, so you may need to have a CORS extension installed in Chrome. You can get one from chrome.google.com.

How would I use this in production? I think I would run a simple Python (CherryPy) or Go server that hosts the HTML and JS, and then write my application to call the site. I know where the download will go, so I could parse the file when it is done. Then I could either return it to the server or do something with it on my machine.
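
The Go version of that server piece is tiny. A minimal sketch, assuming the HTML and CABQ.js sit in the directory you run it from (the port is arbitrary):

package main

import "net/http"

func main() {
	// Serve the HTML and JS files from the current directory.
	http.ListenAndServe(":8080", http.FileServer(http.Dir(".")))
}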

ESRI REST Geocoding Service in Go

9 Jul

The first thing I wanted to do with Go was connect to a REST endpoint. This is an easy task, something I do a lot, and it provides some great possibilities for larger applications. The full code is on GitHub, but I will walk through it section by section below.

First, declare the package and import the needed libraries.

package main

import (
	"fmt"
	"net/http"
	"io/ioutil"
	"encoding/json"
)

We use fmt to print the result to the console, net/http to connect to the ESRI service, io/ioutil to read the data the page returns, and encoding/json to decode the results, which are returned as JSON.

Next, we need to define the structure of our results.

type geocoderesult struct {
	Candidates []candidates
}

type candidates struct {
	Address  string
	Location struct {
		X float64
		Y float64
	}
}

The JSON result from the geocoding service has a bunch of data, and the results we want are stored in an array named candidates. You need a struct that will grab that result set. The struct geocoderesult has a Candidates []candidates field, so we also need to define candidates. The second struct defines a candidate as containing an Address and a Location, and Location is itself a struct containing an x and a y value. The structs match the JSON response.

Now, we create our main function to run the program.

func main() {
	response, err := http.Get("http://server/Geocode/findAddressCandidates?Street=100+lomas&f=json")
	if err != nil {
		fmt.Printf("%s", err)
	} else {
		// Read and print the results here (shown below).
	}
}

The function connects to the service and passes the parameters Street and f. These are part of the ESRI REST API, and I have explained them in other posts. http.Get returns a response and an error. If there is an error, we print it; in the else block, we can process the results.

defer response.Body.Close()
c, _ := ioutil.ReadAll(response.Body)
var data geocoderesult

json.Unmarshal(c,&data)
fmt.Println(data.Candidates[0].Address)
fmt.Println(data.Candidates[0].Location)

We close the Body when we are done with it – defer lets us keep the close next to where we open it. Next, we read the body of the response. In Go, many functions return more than one value (a value and an error). If you do not want to use one of the returned values, you need to assign it to an underscore; if you declare it with a name and don't use it, the program will not compile.

The next line declares data as a new, empty geocoderesult. We then unmarshal (decode) the JSON response – c – into data. Notice the ampersand before data? Go uses pointers. The ampersand says we want to put the unmarshaled JSON at the location of data in memory – basically, replace the current value of data with the decoded JSON.

Lastly, we grab the first result returned from the service and print it. Indexes start at 0. data contains a slice of Candidates, so we can index it with data.Candidates[x]. Each candidate then has fields for Address and Location.

Now you can build the application and run it. To make it more useful, you could have it read a text file of addresses and write out a new file with their locations. Or reverse geocode a series of points. Then you could build the binary and run it whenever you need it.
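
Here is a minimal sketch of that batch idea, reusing the structs from above. The file names are illustrative, and the endpoint is the placeholder from this post:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"io/ioutil"
	"net/http"
	"net/url"
	"os"
)

type geocoderesult struct {
	Candidates []candidates
}

type candidates struct {
	Address  string
	Location struct {
		X float64
		Y float64
	}
}

func main() {
	in, _ := os.Open("addresses.txt") // one address per line
	defer in.Close()
	out, _ := os.Create("located.csv") // address,x,y
	defer out.Close()

	scanner := bufio.NewScanner(in)
	for scanner.Scan() {
		address := scanner.Text()
		response, err := http.Get("http://server/Geocode/findAddressCandidates?f=json&Street=" + url.QueryEscape(address))
		if err != nil {
			fmt.Printf("%s", err)
			continue
		}
		c, _ := ioutil.ReadAll(response.Body)
		response.Body.Close()

		var data geocoderesult
		json.Unmarshal(c, &data)
		if len(data.Candidates) > 0 {
			fmt.Fprintf(out, "%s,%f,%f\n", address, data.Candidates[0].Location.X, data.Candidates[0].Location.Y)
		}
	}
}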

In my next post, I will show you how to wrap this code in a library so we can call it from another program as a function: ourlibrary.geocode().

Go Library for ESRI REST

9 Jul

In my last post, I showed how to consume an ESRI REST geocoding service using Go. In this post, I will consume a feature service and query it. This post also takes things one step further: we will put the code in a library so that anyone can use it in their applications. The full code is on GitHub.

Usually, the first line is package main, but we will change it to the name of our new package. Then we can import the same libraries as in our geocoding sample.

package esrirest

import (
	"fmt"
	"net/http"
	"io/ioutil"
	"encoding/json"
)

The structs that we create will be different as well. In the geocoding example we wanted candidates; when running a query, we want features.

type query struct {
	Features []features
}

ESRI's JSON uses attributes instead of properties, so we will make Attributes a map with interface{} values. This allows us to grab any data types without having to hardcode them, as we did in the geocoding example (address, location {x,y}).

type features struct {
	Attributes map[string]interface{}
}

In case we need it later – if I ever get around to adding more functionality – I will declare the variable data outside the function.

var data query

Now, instead of writing a main() function, we will write a function Query – and not include a main at all, because this is a library and not meant to be executed stand-alone. The Query function takes a string (a URL with parameters) that points to the ESRI service, and it returns a slice of features. The rest of the code looks exactly like the geocoding example.

func Query(s string) []features {
	response, err := http.Get(s)
	if err != nil {
		fmt.Printf("%s", err)
	} else {
		defer response.Body.Close()
		c, _ := ioutil.ReadAll(response.Body)
		json.Unmarshal(c, &data)
	}
	return data.Features
}

The one difference is that the function returns the slice of features – data.Features. You now have a functioning library that you can use in any program.

Use the Library

I will now show you how to use the library.

Run the command:

go get github.com/PaulCrickard/Go/esrirest

This will grab the package and install it locally for you. Now write your application.

Declare the package to be main and import the needed libraries. In this case we only need fmt and our new library that we created, downloaded and installed.

package main

import (
	"fmt"
	"github.com/PaulCrickard/Go/esrirest"
)

If you look in your pkg directory, you will see a folder for your OS type (windows_amd64), then github.com, PaulCrickard, Go and a file esrirest.a. That is the path you use for the import.

Create your main function, and now just call the function in our library, passing the URL.

func main() {
	// Pass query parameters in the URL. MUST HAVE ----> f=json
	d := esrirest.Query("http://coagisweb.cabq.gov/arcgis/rest/services/public/PublicArt/MapServer/0/query?where=1=1&outFields=*&f=json")
}

All of the data is now in d. We can now access features and attributes.

fmt.Println(d[0].Attributes["TITLE"])

Or, we can iterate through all the features and print the titles.

for i := 0; i < len(d); i++ {
	fmt.Println(d[i].Attributes["TITLE"])
}

What if you do not know what the attribute names (keys) are? You can iterate through and grab them.

for key, value := range d[0].Attributes {
	fmt.Println(key, value)
}

This will print the key and value for every attribute of the first feature. Throw this inside your for loop above to go through each feature.

You can grab just the keys by using:

for key := range d[0].Attributes {
	fmt.Println(key)
}

Or print the values:

for _, value := range d[0].Attributes {
	fmt.Println(value)
}

Now you know how to create a library, how to install it and how to use it in another application.
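
Putting the pieces together, a complete client is only a few lines. This sketch assumes the same public art service URL used above:

package main

import (
	"fmt"

	"github.com/PaulCrickard/Go/esrirest"
)

func main() {
	// f=json is required, as noted above.
	d := esrirest.Query("http://coagisweb.cabq.gov/arcgis/rest/services/public/PublicArt/MapServer/0/query?where=1=1&outFields=*&f=json")

	// Print every attribute of every feature.
	for i := 0; i < len(d); i++ {
		for key, value := range d[i].Attributes {
			fmt.Println(key, value)
		}
	}
}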


The Go Language

9 Jul

I am primarily a Python and JavaScript developer (I throw in an occasional C# application, usually for creating REST endpoints on the server side). But recently, I have started looking more seriously at Go. The Go language is not replacing JavaScript for me, but it could replace a big chunk of my Python and C# code.

I see two big advantages in using Go – besides it being fast and the whole concurrency brouhaha:

  1. Cross platform.
  2. No dependencies.

Before you yell that Python is cross-platform too, let me explain. The same Go code runs on Windows, Mac and Linux. So does Python, right? Well, Go code compiles to a binary for each operating system. This is a big deal for me and one of the reasons I like C#. I can just deploy the .exe and it runs. No command line. No .bat file pretending to be an .exe. This brings me to the second advantage.

Everything you need to run a Go application is in the binary. C# is great, but I can't count the number of times I deployed an application and the user had an old version of .NET and it didn't work. With Python it's even worse. My development machine has every third-party library I need, but getting that setup onto every machine I want to deploy to is a nightmare (Docker looks like it could help with this).

There are other things I like about Go – godoc, gofmt, go get. There are also things I don't like (or am not yet proficient with) – pointers.

In later posts, I will share my Go code. For now, here is the obligatory Hello World application.

package main

import "fmt"

func main() {
	fmt.Println("Hello, World")
}

Why Your Org Should Develop Software

11 Jun

If only there was a way to move this to here and make this load automatically in there and all I have to do is press a button, then I could get straight to the part of my job that requires personal attention and knowledge and not have to keep doing the same ten steps over and over.

I hear this a lot. When someone has been in a position long enough, they quickly discover routine tasks that could be automated or made more efficient. What started as a desire to simplify my job and not waste time performing rote tasks turned into a career of writing simple applications to make everyone's routine tasks easier.

This worked well for me in small companies that lacked resources – and were forward-thinking. They didn't know Python from Perl and didn't care. They knew a task that took days now took hours – with no money being spent – and that was enough.

Enter the large organization.

The large organization buys its applications. They have accounting software, web servers, databases, Cognos, SharePoint, and GIS applications. When you have a task that needs to be accomplished, you use one of these applications. If you can’t, go buy something that will.

This line of thinking results in inefficient workplaces.

Joe needs to grab a field from a database and put it on a website. Great, our organization has software for that – Cognos. Does Joe really need a massive Cognos report to display data that really only takes three lines of code? No. So what are his options? Build it or buy it.

These daily tasks that individuals need done are often too specific to be solved by off-the-shelf applications, yet simple enough that they could be built in house – the sad part is they aren't.

The Case Against In House Development

In house development is prevented for a variety of reasons. My favorites are:

Who will maintain it?

  • We can't have an employee wasting time fixing applications; that costs money.
  • When the employee leaves, who will maintain it then?
  • When we need updates, who will do it?

What about security?

  • In House applications are not secure.

Who is accountable?

  • If we buy it, we have someone to blame when it all goes wrong.
  • We have someone to sue if something goes wrong.

The Case for In House Development

These reasons for not pursuing in house development seem reasonable enough, but we need to examine them and the alternatives.

Who will maintain it?

You need someone to maintain it, and it should be your in house development team. It is not a waste of money: if the application results in efficiency gains, those gains need to be measured against the costs of building and maintaining the application. Who maintains vendor applications? The vendor. But do they maintain them for free? Not always. Need something fixed in your application because you got rid of your image server for ESRI REST and now your vendor applications don't work? Too bad. You broke it, not them. Pay, and maybe they will fix the application for you. Did you upgrade to IE 9 because of another application and now your primary application doesn't work? Oh well. Your vendor doesn't have an IE 9 version yet. If you need a new feature, will your vendor add it? Will they charge you for it? What if your vendor discontinues the product? No support for you – unless you buy the new version or application.

What about security?

I hate this argument. I understand that you have no confidence in your in house developers, but to think that a developer working for a vendor somehow makes their applications more secure is absurd. Let's just assume for now that they are more secure. Security is measured by what we know today. In 1995, developers were not thinking about SQL injections. Their applications were secure – as far as they knew. Time proved them wrong. While many applications were rewritten, we still find these holes today. The point is, as technology changes, we see new security holes. While you can protect yourself with what you know today, protecting against the future is difficult. And if you need a fix, will your vendor provide it quickly and cheaply?

Security is not a function of company size and reputation. Microsoft produces software that has numerous vulnerabilities. Of course their software is huge, but so is the company. ESRI ArcServer 10 is subject to cross-site scripting vulnerabilities, and ESRI has no updates for 10, so you need to buy a newer version. But I haven't been using 10 for very long and my budgets are tight. Can't I just get a patch? Nope.

Software is hard, and hackers are looking for ways to exploit it. You can code with what you know to be best practices, but the key to security is fixing known issues as soon as they arise – not waiting on purchasing to approve a contract for a fix, if your vendor has even gotten around to writing one.

Who is accountable?

Organizations like to be able to point the finger at someone else, but this doesn't make you function better. If you are a government providing a service that doesn't work, the public couldn't care less who built it; they just want it to work. If they find out you paid a large sum of money for the complicated, buggy, requires-a-plugin application they are using, they are going to be even more upset that their tax dollars went toward it.

Go ahead, pass the buck, but your users don't care. They want to get their jobs done, or interact with your organization. And if it has your logo on it, it's you, even if you didn't build it. So if you are going to let a vendor control your image and reputation, good luck. At least you can spend even more money to sue them when it fails or when they lose all your customers' social security numbers.

One word for you: Healthcare.gov.

The feds paid millions for a website that was a complete failure. Nobody cared that the government didn't build it. The government got the blame for hiring an inept vendor and for spending millions and getting nothing. Can you name the vendor? No, because even you don't care – it was the government's fault. In response to this fiasco, the feds started 18F and the US Digital Service – in house developers. But your organization doesn't need developers, because you're special.


The best software is software you use.

The best software is software you use. Basecamp is a popular project management application that started out as the developers' internal project management app. They wrote it for themselves; now over 15,000,000 people have used it. Git is another example. Git is a revision control system that was developed by Linus Torvalds for his work maintaining the Linux kernel. It is now the most widely used version control system.

Your organization knows what it needs. If it can build it, it will be better off. I am also a realist; I do not think you should build everything. But for the small tasks that make everyone more efficient, why not?


Data and Design

8 May

If you have read any of the posts on this blog, you should know I love data. But what you may not know is that I love architecture, design and a good sketch. I spent five hours getting Study For The War Coffer by Eugene Delacroix tattooed on my chest.


Often these two worlds collide. I came across a tweet today:

Mindlessly drawing with data? How dare she. I once thought it a good idea to write computer code that could read an architectural program and develop the floor plan automatically. While I still favor some of this thinking, I have had to think it through, and slowly I have come out against it and sided with Tara on this issue.

There is something to be said for hand drawing. The lines made by a pen, with their varying weights, show movement in a still image. There is something beautiful about them, about the process of sketching, freely moving your hand across the canvas. The AIA had a podcast on Didactic Drawing that really brought it all home for me. On a computer, scale can change. You can draw a hundred-foot line and, based on your zoom level (scale), it could be a millimeter long. On paper, your scale is fixed. The movement of your hand across the page lets you know how long the line is.

I am not against BIM. But without pre-sketching designs, these programs make it easy to create boxes, squares and overall bland buildings, and to draw without a set scale or a full understanding and feel for the building you are creating. To design with data is an idea I am still deeply attached to. But I think we walk a fine line between letting data inform design – how people use buildings, for example – and letting it create the design for us – as in my floor plan program example earlier.

Applications like Revit or Grasshopper make it easy to start with a simple form – a box – and twist, pull, rotate and skew it to come up with a whole host of possible forms. The results are soulless – though some look really cool. I do not see the art in it. If we are just going to feed some data into a model to generate a form and say, "Look at this cool form I created using the coordinates of all tweets that had the word Gehry in them," then we might as well give up – though I find these kinds of experiments interesting.

Data is, of course, valuable for facility maintenance. I also find value in data on the movements of individuals within buildings, and in modeling designs for things like airflow, heat, sunlight, etc. These are the kinds of data that can inform design – or confirm that a specific design is a functional design.

I do not want to live in a city full of bland buildings, just as much as I do not want to live in a world full of monuments to the architect that are outrageously out of context. There needs to be a balance of the art and the science, of architecture and data. And each needs to complement the other.


Excel Geocoder

17 Apr

ESRI makes Maps for Office. I thought this could be interesting and went to work trying to insert a map into a spreadsheet. I did not have much luck. Instead, I decided to throw together a quick macro that geocodes the addresses in a spreadsheet and gives back coordinates. I do not think VBA can parse JSON natively, so the result is ugly, but you get the idea.

Start with a spreadsheet of addresses


Then create a Macro – I copied the code for reading the URL from Ryan Farley.

Sub Macro1()
    Range("A2").Select
    i = 2

    Do Until IsEmpty(ActiveCell)
        URL = "http://coagisweb.cabq.gov/arcgis/rest/services/locators/CABQ_NetCurr/GeocodeServer/findAddressCandidates?f=json&outSR=4326&street=" & ActiveCell.Value
        Dim objHttp
        Set objHttp = CreateObject("Msxml2.ServerXMLHTTP")
        objHttp.Open "GET", URL, False
        objHttp.Send
        Cells(i, 2).Value = objHttp.ResponseText
        i = i + 1
        ActiveCell.Offset(1, 0).Select
        Set objHttp = Nothing
    Loop

End Sub

The code starts at cell A2 and reads addresses until it reaches an empty cell. It takes each value and sends it to the ESRI REST endpoint for the City of Albuquerque geocoding service, then puts the response in the cell next to it. The results should really be parsed, but I am too lazy and was just curious whether it could be done. It can. The result is below.


I am still thinking of how to embed a web page in the sheet.