Finding alternative routes in California road network with Neo4j

The focus of this blog post is to introduce Yen’s k-shortest path algorithm, which was recently added to the Neo4j graph algorithms library. I will use the California road network dataset made available by Feifei Li.

We will first enrich our graph using Google’s reverse geocoding API, then find the shortest path with Dijkstra’s algorithm and the second shortest path with Yen’s k-shortest path algorithm.

Graph schema

Our graph has a simple schema: nodes labeled Intersection connected to other intersections by CONNECTION relationships.



Let’s first define the constraint in our graph schema.

CREATE CONSTRAINT ON (i:Intersection) ASSERT i.id IS UNIQUE;

The dataset is split into a nodes file and a relationships file. Let’s import them both to get all the data we need.

Import nodes

// $node_file holds the location of the dataset's node file
LOAD CSV FROM $node_file as row fieldterminator " "
MERGE (i:Intersection{id:row[0]})
ON CREATE SET i.longitude = toFloat(row[1]),
              i.latitude = toFloat(row[2])

Import relationships

// $relationship_file holds the location of the dataset's edge file
LOAD CSV FROM $relationship_file as row fieldterminator " "
MATCH (start:Intersection{id:row[1]})
MATCH (end:Intersection{id:row[2]})
MERGE (start)-[c:CONNECTION{id:row[0]}]->(end)
ON CREATE SET c.length = toFloat(row[3])

Reverse geocode API

For every intersection in our graph we can get the address based on its GPS location with the help of Google’s reverse geocoding API. I used apoc.util.sleep(50) to throttle and wait 50 ms between each API call. It cost me around €7 to get this data as I couldn’t find a free version of the API :/.

MATCH (i:Intersection)
CALL apoc.util.sleep(50)
WITH "https://maps.googleapis.com/maps/api/geocode/json?latlng=" + 
toString(i.latitude) + "," + toString(i.longitude) + "&key={google_api_key}" as json,i
CALL apoc.load.json(json) yield value
SET i.title = value.results[0].formatted_address


Let’s start by visualizing the Santa Barbara part of the road network with neovis.js.



Neovis configuration
var config = {
   container_id: "viz",
   server_url: "bolt://localhost:7687",
   server_user: "neo4j",
   server_password: "neo4j",
   labels: {
     "Intersection": {
      "caption": "title"
     }
   },
   relationships: {
     "CONNECTION": {
      "caption": false
     }
   },
   initial_cypher: "MATCH p=(i1:Intersection)-[:CONNECTION]->(i2:Intersection) " +
     "WHERE i1.title contains 'Santa Barbara' AND i2.title contains 'Santa Barbara' " +
     "RETURN p limit 500"
};

Shortest path

We use algo.shortestPath, which uses Dijkstra’s algorithm, to find the shortest path between “Cathedral Peak Trail” and “5750 Stagecoach Rd”. We set direction:BOTH to treat our graph as undirected.

MATCH (start:Intersection{title:"Cathedral Peak Trail"}),
      (end:Intersection{title:"5750 Stagecoach Rd"})
CALL algo.shortestPath.stream(start,end,'length',{direction:'BOTH'}) YIELD nodeId,cost
MATCH (n) where id(n)= nodeId
RETURN n.title,cost

Visualization made with neovis.js.


Neovis configuration
var config = {
   container_id: "viz",
   server_url: "bolt://localhost:7687",
   server_user: "neo4j",
   server_password: "neo4j",
   labels: {
     "Intersection": {
       "caption": "title"
     }
   },
   relationships: {
     "CONNECTION": {
       "thickness": "length",
       "caption": false
     }
   },
   initial_cypher: "MATCH (start:Intersection{title:'Cathedral Peak Trail'}),(end:Intersection{title:'5750 Stagecoach Rd'}) " +
     "CALL algo.shortestPath.stream(start,end,'length',{direction:'BOTH',nodeQuery:'Intersection',relationshipQuery:'CONNECTION'}) YIELD nodeId,cost " +
     "MATCH (n) where id(n)=nodeId " + 
     "WITH collect(n) as nodes " +
     "UNWIND range(0, length(nodes)-2) as index " +
     "WITH nodes[index] as from, nodes[index + 1] as to " + 
     "MATCH p=(from)-[:CONNECTION]-(to) " +
     "RETURN p"
};

Yen’s k-shortest paths

Yen’s k-shortest paths algorithm computes single-source K-shortest loopless paths for a graph with non-negative relationship weights.

It uses Dijkstra’s algorithm to find the shortest path and then proceeds to find k-1 deviations of it. If we want the second shortest path, we use k=2 as shown below.
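
To make the deviation step concrete, here is a minimal Python sketch of the idea (this is not the library’s implementation; the function names and the plain-dict graph representation are my own). It finds the shortest path with Dijkstra’s algorithm, then, for each prefix of the last accepted path, bans the edges already used by paths sharing that prefix and searches for a spur path from there:

```python
import heapq

def dijkstra(graph, source, target, banned_edges=frozenset(), banned_nodes=frozenset()):
    # graph: {node: {neighbor: weight}}; returns (cost, path) or None
    pq = [(0.0, source, [source])]
    done = set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == target:
            return cost, path
        if node in done:
            continue
        done.add(node)
        for nxt, w in graph.get(node, {}).items():
            if nxt not in done and nxt not in banned_nodes and (node, nxt) not in banned_edges:
                heapq.heappush(pq, (cost + w, nxt, path + [nxt]))
    return None

def yen_k_shortest(graph, source, target, k):
    shortest = dijkstra(graph, source, target)
    if shortest is None:
        return []
    found = [shortest]
    candidates = []
    for _ in range(1, k):
        prev_path = found[-1][1]
        for i in range(len(prev_path) - 1):
            root = prev_path[:i + 1]
            root_cost = sum(graph[root[j]][root[j + 1]] for j in range(i))
            # ban edges used by already-found paths sharing this root,
            # forcing the spur path to deviate here
            banned_edges = frozenset((p[i], p[i + 1]) for _, p in found if p[:i + 1] == root)
            banned_nodes = frozenset(root[:-1])
            spur = dijkstra(graph, root[-1], target, banned_edges, banned_nodes)
            if spur is not None:
                candidate = (root_cost + spur[0], root[:-1] + spur[1])
                if candidate not in found and candidate not in candidates:
                    heapq.heappush(candidates, candidate)
        if not candidates:
            break
        found.append(heapq.heappop(candidates))
    return found
```

With k=2 this returns the shortest path first and its cheapest deviation second, which is exactly what algo.kShortestPaths does for us below.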

MATCH (start:Intersection{title:"Cathedral Peak Trail"}),
(end:Intersection{title:"5750 Stagecoach Rd"})
CALL algo.kShortestPaths(start, end, 2, 'length', {})
YIELD resultCount
RETURN resultCount

Results are stored as paths in our graph.

MATCH p=()-[:PATH_0|:PATH_1]->()
RETURN p

Shortest path is visualized in red and second shortest path in yellow. We can easily observe that the paths diverge right at the start and join together 2 hops before the end.



With the addition of Yen’s k-shortest algorithm to the Neo4j graph algorithms library we can now search for alternative shortest routes in our graph. This can come in handy in various domains.


Neo4j A* Algorithm

Just recently the A* algorithm was added to Neo4j graph algorithms, and I decided to show how nicely APOC spatial functions fit with it, as it uses GPS locations for its heuristic.


I found this cool GitHub repo, geoiq/acetate, which hosts geospatial data we can thank its many contributors for. I picked a file containing a list of 100+ cities in Europe and imported them into Neo4j.

// column names assumed from the CSV header; $cities_file holds the file location
LOAD CSV WITH HEADERS FROM $cities_file as row
MERGE (city:City{name: row.name})
ON CREATE SET city.population = toInteger(row.population)
MERGE (country:Country{code: row.`country code`})
MERGE (city)-[:IS_IN]->(country)

Apoc spatial

Even though the GPS locations of the cities are included in the CSV, I did not import them, just so I can show how you can get them yourself using the geocoding API hidden in the apoc.spatial.geocodeOnce procedure. Find more details in my Neo4j to ELK post and the documentation.

MATCH (city:City)-[:IS_IN]->(country)
CALL apoc.spatial.geocodeOnce(city.name + " " + country.code) 
YIELD location
// Save response
SET city.latitude = location.latitude,
    city.longitude = location.longitude

Distance calculation

Let’s assume we want to go on a trip through Europe and visit the cities on the list along the way. Our only requirement is that we don’t travel more than 250 km per day, so that we have time and energy left to act as tourists and explore the cities.

We will calculate the distance between cities and, where it is less than 250 km, save it as a relationship property between them. This way we set a threshold of at most 250 km of travel per day and still wind up in one of the cities on the list every day.
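
Neo4j’s distance() function does this computation for us; as a sanity check, here is the same great-circle calculation done by hand in Python with the haversine formula (the helper names and the rough city coordinates used below are my own):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two GPS points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # mean Earth radius ~6371 km

def within_daily_range(c1, c2, limit_km=250):
    """Link two cities only when they are within a day's travel of each other."""
    return haversine_km(c1["lat"], c1["lon"], c2["lat"], c2["lon"]) < limit_km
```

Cities further apart than 250 km simply get no LINK relationship, which is what keeps multi-day hops out of the later route search.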

WITH 250 as distanceInKm
MATCH (c1:City),(c2:City)
WHERE id(c1) < id(c2)
WITH c1,c2,distanceInKm,
         distance(point({longitude:c1.longitude,latitude:c1.latitude}),
                  point({longitude:c2.longitude,latitude:c2.latitude})) as distance
WHERE distance < (distanceInKm * 1000) 
MERGE (c1)-[l:LINK]->(c2)
ON CREATE SET l.distance = distance

A* Algorithm

A* is an informed search algorithm, or a best-first search, meaning that it solves problems by searching among all possible paths to the solution (goal) for the one that incurs the smallest cost (least distance travelled, shortest time, etc.), and among these paths it first considers the ones that appear to lead most quickly to the solution. It is formulated in terms of weighted graphs: starting from a specific node of a graph, it constructs a tree of paths starting from that node, expanding paths one step at a time, until one of its paths ends at the predetermined goal node.

Taken from Wikipedia

The current implementation of the A* algorithm in the graph algorithms library uses geospatial information as the heuristic function to do a best-first search through the graph.
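
Conceptually the search looks like this generic A* sketch in Python (a textbook version under my own naming, not the library’s code): the straight-line haversine distance to the goal is added to the cost travelled so far when ordering the priority queue, so geographically promising paths are expanded first.

```python
import heapq
from math import radians, sin, cos, asin, sqrt

def haversine_m(a, b):
    """Straight-line distance in metres between (lat, lon) tuples."""
    lat1, lon1, lat2, lon2 = map(radians, (a[0], a[1], b[0], b[1]))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(h))

def astar(graph, coords, start, goal):
    """graph: {city: {neighbor: distance_m}}; coords: {city: (lat, lon)}.
    The straight-line distance to the goal is an admissible heuristic,
    since no route can be shorter than the direct line."""
    pq = [(haversine_m(coords[start], coords[goal]), 0.0, start, [start])]
    best = {}
    while pq:
        _, g, node, path = heapq.heappop(pq)
        if node == goal:
            return g, path
        if node in best and best[node] <= g:
            continue
        best[node] = g
        for nxt, w in graph[node].items():
            f = g + w + haversine_m(coords[nxt], coords[goal])
            heapq.heappush(pq, (f, g + w, nxt, path + [nxt]))
    return None
```

Because the heuristic never overestimates, the first time the goal is popped off the queue the path is optimal, just as with Dijkstra, but far fewer nodes are expanded.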

I live near Ljubljana, and let’s say I want to travel to Amsterdam with my backyard helicopter :).

MATCH (start:City{name:"Ljubljana"}),(end:City{name:"Amsterdam"})
CALL algo.shortestPath.astar.stream(start, end, 'distance', 'latitude', 'longitude', {})
YIELD nodeId, cost
MATCH (n) where id(n)=nodeId
RETURN n.name as city, cost

Our graph is not very detailed as it only contains 100-ish cities, so I got back quite an interesting result. At first glance it doesn’t really seem like the fastest way to get to Amsterdam, but it is definitely an interesting one judging by the cities we would visit and the scenery, ranging from the Adriatic sea to the Alps and finally the flatlands of the Netherlands.


Cost is the distance in meters.

Let’s also visualize the resulted path with neovis.js.

This is the query we use to get back the shortest path from the A* algorithm. It could also be used for visualization in Neo4j Browser.

MATCH (start:City{name:'Ljubljana'}),(end:City{name:'Amsterdam'})
CALL algo.shortestPath.astar.stream(start, end, 'distance', 'latitude', 'longitude', {})
YIELD nodeId, cost
MATCH (n) where id(n)=nodeId 
WITH collect(n) as nodes
UNWIND range(0, length(nodes)-2) as index
WITH nodes[index] as from, nodes[index + 1] as to 
MATCH p=(from)-[:LINK]-(to)
RETURN p

I used population of the cities for defining the size of the nodes and distance between cities for defining the width of the relationships.


Code available on GitHub.

Knowledge graph

Let’s assume I made it to Amsterdam. Now what? What is worth checking out or visiting?

We can use Google knowledge graph API to find attractions we could visit and enrich our graph with the data. Find more details in my previous post Neo4j APOC triggers and web APIs.

WITH "api_key" as google_api_key
MATCH (c:City{name:"Amsterdam"})-[:IS_IN]->(country)
CALL apoc.load.json("https://kgsearch.googleapis.com/v1/entities:search?query=" 
                    + apoc.text.urlencode(c.name + " " + country.code) + 
                     "&key=" + google_api_key + "&limit=20&indent=True")
YIELD value
UNWIND value.itemListElement as row
WITH row.result as results,c 
WHERE results.name is not null
MERGE (p:Attraction{name: results.name})
ON CREATE SET p.description = results.description,
 p.detailedDescription = results.detailedDescription.articleBody
MERGE (p)-[:IS_IN]->(c)
WITH results,p
UNWIND (results.`@type`) as type
MERGE (t:Type{name:type})
MERGE (p)-[:HAS_TYPE]->(t)

Now that we stored the data we got from the API into our graph, we can search for things to do or visit in Amsterdam.

MATCH (:City{name:"Amsterdam"})<-[:IS_IN]-(a:Attraction)
RETURN a.name as attraction,
       a.description as description



I always loved how easily you can scrape the internet with Neo4j and cypher/APOC procedures. Neo4j allows us to easily enrich our graph and make it a proper knowledge graph of our own by combining multiple data sources, ranging from other databases to online APIs. Combined with the graph algorithms, it becomes an even more serious analytics tool that is fun to work with.

Neo4j Graph Visualizations using GoT dataset

Once again I will use the data made available by Andrew Beveridge, first to demonstrate categorical pageRank broken down by the sequence of the books, which will help us find the winners of the game of thrones, and second to show some visualization options the Neo4j community has to offer.

To find more details about the dataset check Analyzing the Graph of Thrones by William Lyon or my Neo4j GoT social graph analysis.


Let’s first define the schema of our graph.

We only need one constraint, on the Person label. This will speed up our import and later queries.

CREATE CONSTRAINT ON (p:Person) ASSERT p.id IS UNIQUE;

As the data for all five books is available on GitHub, we can import all five books using a single cypher query.

We will differentiate the networks from different books using separate relationship types. We need to use apoc.merge.relationship, as Cypher does not allow using parameters for relationship types. The network from the first book will be stored as relationship type INTERACTS_1, the second as INTERACTS_2, and so on.

UNWIND ['1','2','3','45'] as book
LOAD CSV WITH HEADERS FROM 'https://raw.githubusercontent.com/mathbeveridge/asoiaf/master/data/asoiaf-book' + book + '-edges.csv' as value
MERGE (source:Person{id:value.Source})
MERGE (target:Person{id:value.Target})
WITH source,target,value.weight as weight,book
CALL apoc.merge.relationship(source,'INTERACTS_' + book, {}, {weight:toFloat(weight)}, target) YIELD rel
RETURN distinct 'done'

Categorical pagerank

As described in my previous blog post, categorical pageRank is a concept where we break down the global pageRank into categories and run pageRank on each category subset of the graph separately to get a better understanding of the global pageRank.

Here we will use the books as categories, so that we get a breakdown of each character’s importance by the sequence of books.
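
The procedure call below does the real work in the database; as a rough illustration of the concept, here is a small Python sketch (names are mine) that runs a plain power-iteration pageRank separately on each category’s edge list:

```python
def categorical_pagerank(edges_by_category, damping=0.85, iterations=20):
    """Run a plain power-iteration PageRank separately on each
    category's edge subset (here, one interaction network per book)."""
    def pagerank(edges):
        nodes = {n for edge in edges for n in edge}
        out = {n: [] for n in nodes}
        for s, t in edges:
            out[s].append(t)
        rank = {n: 1.0 / len(nodes) for n in nodes}
        for _ in range(iterations):
            new = {n: (1 - damping) / len(nodes) for n in nodes}
            for s in nodes:
                targets = out[s] or list(nodes)  # dangling node: spread evenly
                for t in targets:
                    new[t] += damping * rank[s] / len(targets)
            rank = new
        return rank
    return {category: pagerank(edges) for category, edges in edges_by_category.items()}
```

A character absent from a book simply has no score in that category, which is what lets us compare importance book by book.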

UNWIND ['1','2','3','45'] as sequence
MERGE (book:Book{sequence:sequence})
WITH book,sequence
CALL algo.pageRank.stream(
 'MATCH (p:Person) WHERE (p)-[:INTERACTS_' + sequence + ']-() RETURN id(p) as id',
 'MATCH (p1:Person)-[:INTERACTS_' + sequence + ']-(p2:Person) RETURN id(p1) as source,id(p2) as target',
 {graph:'cypher'})
YIELD node,score
// filter out nodes with default pagerank 
// for nodes with no incoming rels
WITH node,score,book where score > 0.16
MERGE (node)<-[p:PAGERANK]-(book)
SET p.score = score


Biggest winner of the game of thrones by books so far

Basically, we will order the pageRank values by the sequence of the books and return the top ten characters with the highest positive change in pageRank.

MATCH (person:Person)<-[pagerank:PAGERANK]-(book:Book)
// order by book sequence
WITH person,pagerank,book order by book.sequence asc
WITH person,collect(pagerank) as scores
RETURN person.id as person,
       scores[0].score as first_score,
       scores[-1].score as last_score,
       length(scores) as number_of_books 
ORDER BY last_score - first_score DESC LIMIT 10

While Jon Snow leads by absolute positive difference in pageRank, Victarion Greyjoy is very interesting. He had pageRank score 0.59 in the second book, was missing in third, and jumped to 4.43 in fourth and fifth book.

Stannis Baratheon is probably at the peak of his career judging by the show and is surprisingly in second place. Other than that the list is made out of the usual suspects.


I also checked the characters with the biggest negative change, but it turns out that they are mostly dead so it’s not all that interesting.


Thanks to Michael Hunger, spoonJS is back. With it we can visualize charts directly in Neo4j Browser.

Within a few clicks you can get it set up following the guide.

:play spoon.html

In our example we will visualize characters sorted by pageRank in the last two books combined.

MATCH (p:Person)<-[r:PAGERANK]-(:Book{sequence:'45'})
RETURN p.id as person,r.score as score ORDER BY score DESC



Three out of the first four places belong to the Lannisters, with the most important being Tyrion. If you think about it from this perspective, what GoT is really about might just be a family crisis of the Lannisters with huge collateral damage 🙂

3d force graph

Another cool visualization project by Michael is called 3d-force-graph. It lets us visualize and explore graphs.

We will use pageRank to define the size of the nodes, so that the most important nodes will be the biggest. To represent communities in the graph we use the color of the nodes.

We need to run label propagation or Louvain algorithm to find communities within our graph and store them as a property of the nodes.

We run label propagation using only the network of characters from the last two books.

CALL algo.labelPropagation('Person','INTERACTS_45','BOTH',{partitionProperty:'lpa',iterations:10})

I like this visualization because it is 3d and you can approach from different angles and zoom in and out of the graph while exploring it.



We can also use neovis.js, developed by William Lyon, to visualize graphs. Similarly as before we use label propagation results to color the nodes. To mix it up a bit we will use betweenness centrality of the nodes, instead of pageRank, to represent the size of the nodes in the graph.

Run betweenness centrality algorithm.

CALL algo.betweenness('Person','INTERACTS_45',{direction:'BOTH',writeProperty:'betweenness'})

In the visualization we also defined the size of the relationships based on the weight and colored them according to the community of the pair of nodes they are connecting.



Code for Neovis and 3d-force-graph visualization used in the post can be found on github. Have fun!

Paradise papers analysis with Neo4j

I haven’t used a real-world dataset yet in my blog so I decided it’s time to try out some real-world networks. I will be using the Paradise papers dataset that we can explore thanks to ICIJ.

The Paradise Papers are a leak of millions of documents containing information about companies and trusts in offshore tax havens, revealing details on tens of thousands of companies and people, including some high-profile figures like the Queen of England.

We can get this dataset, and any of the other leaks, at the ICIJ official site in CSV or Neo4j Desktop form. If you are lazy like me, you can just use the Neo4j online sandbox, where you get your own Neo4j instance with the APOC and graph algorithms libraries set up within a matter of seconds.

Graph model:

We have a graph of officers, who are persons or banks in the real world, and entities, which are companies. Entities can also have intermediaries, and they all have one or more registered addresses that we know of.

I will focus more on the use of the algorithms today, but if you want to learn more about the dataset itself and Neo4j, you should check out Analysing the Paradise Papers and An In-Depth Graph Analysis of the Paradise Papers by Michael Hunger and William Lyon.


Michael Hunger & William Lyon, An In-Depth Graph Analysis of the Paradise Papers


Infer the network

As I mentioned before we will focus more on graph algorithms use cases for Paradise papers.

We will assume that officers who are related to the same entity might know each other, or at least have some contact with one another. With this assumption we will create a social network of officers who are related to the same entities and have a registered address in Switzerland. I filtered for Switzerland only so we might get a better understanding of the local investment network.

MATCH (o1:Officer)-[:OFFICER_OF]->()<-[:OFFICER_OF]-(o2:Officer)
WHERE id(o1) > id(o2) AND o1.countries contains "Switzerland" 
AND o2.countries contains "Switzerland"
WITH o1,o2,count(*) as common_investments
MERGE (o1)-[c:COMMON_INVESTMENTS]->(o2)
ON CREATE SET c.weight = common_investments



We start by analyzing the degree of the nodes in our network. There are 1130 officers with a registered address in Switzerland. Each officer has on average 6 contacts with other officers through their entities.

MATCH (o:Officer)
WITH o,size((o)-[:COMMON_INVESTMENTS]-()) as degree
RETURN count(o) as officers,
       avg(degree) as average_degree,
       stdev(degree) as stdev_degree,
       max(degree) as max_degree,
       min(degree) as min_degree


We can search for the pairs of officers with the most common investments, as we stored this value as a property of the relationship.

MATCH (o1:Officer)-[w:COMMON_INVESTMENTS]->(o2)
RETURN o1.name as officer1,
       o2.name as officer2,
       w.weight as common_investments 
order by common_investments desc limit 10

Barnett – Kevin Alan David seems to be very intertwined with the Mackies, as he has 322 common investments with Thomas Louis and 233 with Jacqueline Anne. In fact, eight of the first ten places belong to Barnett – Kevin Alan David, Hartland – Georgina Louise and the Mackies. This would indicate that they cooperate on a large scale.


Weakly connected components

With the weakly connected components algorithm we search for so-called “islands” in our graph. An island, or connected component, is a subgraph in which every node is reachable from every other, and any disconnected part of the global graph forms its own component.
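
The underlying idea can be sketched with a few lines of Python union-find (a generic implementation under my own naming, not the library’s code): every relationship merges the components of its two endpoints.

```python
def connected_components(pairs):
    """Union-find over (a, b) relationship pairs; returns {member: component_root}."""
    parent = {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving keeps trees shallow
            x = parent[x]
        return x
    for a, b in pairs:
        parent[find(a)] = find(b)  # merge the two components
    return {node: find(node) for node in parent}
```

Nodes that end up with the same root belong to the same island, which is what the setId column in the procedure output represents.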

In our scenario it is a useful algorithm for finding people who have common investments in companies and might know each other, or at least can communicate with each other more easily.

CALL algo.unionFind.stream(
    'MATCH (o:Officer) WHERE (o)-[:COMMON_INVESTMENTS]-()
    RETURN id(o) as id',
    'MATCH (o1:Officer)-[:COMMON_INVESTMENTS]-(o2)
    RETURN id(o1) as source,id(o2) as target',
    {graph:'cypher'})
YIELD nodeId,setId
RETURN setId as component,count(*) as componentSize
ORDER BY componentSize desc limit 10

As with most real-world graphs I have encountered so far, we get one large component and some smaller ones. If we wanted, we could dig deeper into the smaller components, check out their members and see if something interesting comes up.


Let’s visualize component 14 for example.


Studhalter – Alexander Walter seems to be quite interlaced with Gadzhiev – Nariman, as they have 60 common investments. To complete the triangle there is Studhalter – Philipp Rudolf with 15 common investments with Alexander Walter and 12 with Nariman. Alexander Walter is positioned at the center of this graph with connections to 8 different officers, and we could assume that he holds some influence over this network.


PageRank was first used to measure the importance of websites to help users find better results when searching the internet. In the domain of websites and links, each link is treated as a vote from one website to another, indicating that there is some quality content over there. When calculating pageRank it is also taken into account how important the voting website is, as a link from a major, trusted site means something completely different than a link from an obscure one.
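
For intuition, here is a small power-iteration pageRank sketch in Python (a textbook version, not the library’s implementation; names are mine): each node repeatedly distributes its current score over its outgoing links.

```python
def pagerank(edges, damping=0.85, iterations=20):
    """Plain power-iteration PageRank over directed (source, target) pairs."""
    nodes = {n for edge in edges for n in edge}
    out = {n: [] for n in nodes}
    for s, t in edges:
        out[s].append(t)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for s in nodes:
            # a node's "vote" is split among all its outgoing links;
            # dangling nodes spread their score evenly over the whole graph
            targets = out[s] or list(nodes)
            for t in targets:
                new[t] += damping * rank[s] / len(targets)
        rank = new
    return rank
```

A node linked to by many well-ranked nodes accumulates a high score, which is exactly the “important vote counts more” behaviour described above.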

In the Paradise Papers domain we can use it to find potential influencers in our inferred COMMON_INVESTMENTS network, as officers who have common investments with other important officers will come out on top.

CALL algo.pageRank.stream(
    'MATCH (o:Officer) WHERE (o)-[:COMMON_INVESTMENTS]-()
     RETURN id(o) as id',
    'MATCH (o1:Officer)-[:COMMON_INVESTMENTS]-(o2)
     RETURN id(o1) as source,id(o2) as target',
    {graph:'cypher'})
YIELD node,score
WITH node,score order by score desc limit 10
RETURN node.name as officer,score

Cabral – Warren Wilton comes out on top by a large margin. I checked him out and it turns out he is an officer of 430 different entities and has connections to 116 other officers from Switzerland through his entities. Find out more about Cabral – Warren Wilton. Next comes the Swiss Reinsurance Company, which is a shareholder of 19 different entities. You can get the same detailed look for Swiss Reinsurance thanks to the ICIJ.


Harmonic closeness centrality

We can interpret closeness centrality as the potential ability of a node to reach all other nodes as quickly as possible. This works both ways in our example as also other nodes can reach a specific node quickly through shortest paths between them. Harmonic centrality is a variation of closeness centrality that deals nicely with disconnected graphs.

In our domain we could interpret it as the potential ability for “insider trading” as having quick access to other nodes in the network could potentially lead to an advantage such as having access to (confidential) information before others.
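
For intuition, here is a small Python sketch of harmonic centrality on an unweighted graph (names are mine, not the library’s code): a BFS from each node sums the reciprocals of the shortest-path distances, so unreachable nodes simply contribute zero.

```python
from collections import deque

def harmonic_centrality(adj):
    """adj: {node: set(neighbors)} (undirected). For each node, sum 1/d over
    every reachable node; unreachable nodes add 0, which is why this variant
    handles disconnected graphs gracefully."""
    scores = {}
    for source in adj:
        dist = {source: 0}
        queue = deque([source])
        while queue:  # breadth-first search for shortest hop distances
            node = queue.popleft()
            for nxt in adj[node]:
                if nxt not in dist:
                    dist[nxt] = dist[node] + 1
                    queue.append(nxt)
        scores[source] = sum(1.0 / d for d in dist.values() if d > 0)
    return scores
```

A node one hop from everyone scores close to n-1, while an isolated node scores 0, so higher values mean quicker potential reach.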

CALL algo.closeness.harmonic.stream(
    'MATCH (o:Officer) WHERE (o)-[:COMMON_INVESTMENTS]-()
     RETURN id(o) as id',
    'MATCH (o1:Officer)-[:COMMON_INVESTMENTS]-(o2)
     RETURN id(o1) as source,id(o2) as target',
    {graph:'cypher'})
YIELD nodeId,centrality
WITH nodeId,centrality order by centrality desc limit 10
MATCH (n) where id(n)=nodeId
RETURN n.name as officer,centrality

Cabral – Warren Wilton also leads by harmonic centrality. He seems to be a big player in Switzerland. The Swiss Reinsurance Company and PricewaterhouseCoopers are the only two that were also in the pagerank top 10 leaderboard. All the others are new candidates we haven’t seen before. We can take a deeper look at Schröder – Stefan and observe that he has connections in SwissRe.


Betweenness centrality

Betweenness centrality is useful in finding nodes that serve as a bridge from one group of users to another in a graph. Betweenness centrality in a social network can be interpreted as a rudimentary measure of the control that a specific node exerts over the information flow throughout the graph.
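
The standard way to compute this exactly is Brandes’ algorithm; here is a compact Python version for unweighted, undirected graphs (a textbook implementation under my own naming, not the library’s code):

```python
from collections import deque

def betweenness(adj):
    """Brandes' algorithm. adj: {node: set(neighbors)}, undirected."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        stack, preds = [], {v: [] for v in adj}
        sigma = {v: 0 for v in adj}  # number of shortest paths from s
        sigma[s] = 1
        dist = {s: 0}
        queue = deque([s])
        while queue:  # BFS counting shortest paths
            v = queue.popleft()
            stack.append(v)
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        delta = {v: 0.0 for v in adj}
        while stack:  # accumulate dependencies in reverse BFS order
            w = stack.pop()
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    for v in bc:
        bc[v] /= 2.0  # each undirected pair was counted from both endpoints
    return bc
```

The score counts, for every pair of other nodes, the fraction of shortest paths passing through a node, which is why bridge nodes come out on top.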

CALL algo.betweenness.stream(
    'MATCH (o:Officer) WHERE (o)-[:COMMON_INVESTMENTS]-()
     RETURN id(o) as id',
    'MATCH (o1:Officer)-[:COMMON_INVESTMENTS]-(o2)
     RETURN id(o1) as source,id(o2) as target',
    {graph:'cypher'})
YIELD nodeId,centrality
WITH nodeId,centrality order by centrality desc limit 10
MATCH (n) where id(n)=nodeId
RETURN n.name as officer,centrality

The usual players hold the top 3 spots. We can also spot Schröder – Stefan in fifth place, along with other officers we haven’t come across yet. It’s interesting to see Zulauf – Hans-Kaspar up there, as he is an officer of only two entities, but it looks like his network position makes him this interesting.


Label propagation algorithm

The label propagation algorithm is a community detection algorithm. It divides the network into communities of nodes with dense connections internally and sparse connections between communities.
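
The mechanics can be sketched in a few lines of Python (a generic version, not the library’s implementation; names are mine): every node starts with its own label and repeatedly adopts the most frequent label among its neighbors until nothing changes.

```python
import random

def label_propagation(adj, iterations=10, seed=42):
    """adj: {node: set(neighbors)}. Each node repeatedly adopts the most
    common label among its neighbors; dense regions converge to one label."""
    rng = random.Random(seed)
    labels = {v: v for v in adj}  # every node starts in its own community
    nodes = list(adj)
    for _ in range(iterations):
        rng.shuffle(nodes)  # random update order, as in the classic algorithm
        changed = False
        for v in nodes:
            if not adj[v]:
                continue
            counts = {}
            for w in adj[v]:
                counts[labels[w]] = counts.get(labels[w], 0) + 1
            # deterministic tie-break so the run is reproducible
            best = max(counts, key=lambda l: (counts[l], str(l)))
            if labels[v] != best:
                labels[v] = best
                changed = True
        if not changed:
            break
    return labels
```

Since labels only travel along edges, disconnected or sparsely linked groups can never end up sharing a community label.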

CALL algo.labelPropagation(
    'MATCH (o:Officer) WHERE (o)-[:COMMON_INVESTMENTS]-()
     RETURN id(o) as id',
    'MATCH (o1:Officer)-[q:COMMON_INVESTMENTS]-(o2)
     RETURN id(o1) as source,id(o2) as target',
    'BOTH',
    {graph:'cypher',partitionProperty:'labelpropagation'})

To help us with analyzing communities we will use Gephi to visualize our network.

Visualize with Gephi:

I like to use Gephi for visualizing networks. It is a really cool tool that lets you draw nice network visualizations based on centrality and community values.

Check out my previous blog post Neo4j to Gephi for more information.

We would need to save centrality and label propagation results to nodes if we wanted to export them to Gephi. Assuming we have done that we can use the following query to export data from Neo4j to Gephi.

MATCH path = (o:Officer)-[:COMMON_INVESTMENTS]->(:Officer)
CALL apoc.gephi.add(null,'workspace1',path,'weight',['pagerank','labelpropagation','betweenness']) 
yield nodes
return distinct "done"

Here we have a visualization of only the biggest component in the graph with 344 members. There are 10+ communities that we can easily identify just by looking at this picture. I used label propagation results for color of nodes, betweenness centrality results for node size and pageRank results for node title.

We can’t really see much except that Cabral – Warren Wilton is very important in our network and positioned at the center of it.



Let’s zoom in on the center of the network to get a better understanding of the graph.

As we noticed at the start, Barnett – Kevin Alan David is deeply connected with the Mackies and Hartlands. I also noticed Hartland Mackie – Thomas Alan located on the bottom left, which might explain why the Hartlands and Mackies are so deeply connected. We can also find Barnett – Emma Louise in this community, which would make this community (red) primarily one of Barnetts, Hartlands and Mackies.

On the bottom right we can find Schröder – Stefan very near the Swiss Reinsurance Company.




I think that an understanding of the graph combined with a proper visualization tool is powerful in the hands of a data explorer. With Neo4j and Gephi we are able to understand the graph and find insights even when we have little prior knowledge about the data and what exactly we are looking for in the first place.

Neo4j Categorical Pagerank

I found this cool Neo4j blog post written by Kenny Bastani that describes a concept called categorical pagerank.
I will try to recreate it using the neo4j-graph-algorithms library and a GoT dataset.



Kenny Bastani, Categorical PageRank Using Neo4j and Apache Spark,

The idea behind it is pretty simple. As shown in the example above, we have a graph of pages that link to each other and may also belong to one or more categories. To better understand the global pagerank score of the nodes in a network, we can break our graph down into several subgraphs, one for each category, and execute the pagerank algorithm on each of those subgraphs. We store the results as a relationship property between category and page.
This way we can break down which categories contribute to a page’s global pagerank score.



Graph Model:

We will use the dataset made available by Joakim Skoog through his API of ice and fire.

I first encountered this dataset when Michael Hunger showed us how to import the data in his game of data blog post. I thought the dataset was pretty nice, and as all I had to do was copy/paste the import queries, I decided to play around with it and wrote a Neo4j GoT Graph Analysis post.

Michael’s import query of house data:

// create Houses and their relationships
call apoc.load.jsonArray('') yield value
// cleanup
with apoc.map.clean(value, [], ['',[''],[],null]) as data
// lowercase keys
with apoc.map.fromPairs([k in keys(data) | [toLower(substring(k,0,1))+substring(k,1,length(k)), data[k]]]) as data

// create House
MERGE (h:House {id: data.url}) 
// set attributes
SET h += apoc.map.clean(data, ['overlord','swornMembers','currentLord','heir','founder','cadetBranches'],[])

// create relationships to people or other houses
FOREACH (id in data.swornMembers | MERGE (o:Person {id:id}) MERGE (o)-[:ALLIED_WITH]->(h))
FOREACH (s in data.seats | MERGE (seat:Seat {name:s}) MERGE (seat)-[:SEAT_OF]->(h))
FOREACH (id in data.cadetBranches | MERGE (b:House {id:id}) MERGE (b)-[:BRANCH_OF]->(h))
FOREACH (id in case data.overlord when null then [] else [data.overlord] end | MERGE (o:House {id:id}) MERGE (h)-[:SWORN_TO]->(o))
FOREACH (id in case data.currentLord when null then [] else [data.currentLord] end | MERGE (o:Person {id:id}) MERGE (h)-[:LED_BY]->(o))
FOREACH (id in case data.founder when null then [] else [data.founder] end | MERGE (o:Person {id:id}) MERGE (h)-[:FOUNDED_BY]->(o))
FOREACH (id in case data.heir when null then [] else [data.heir] end | MERGE (o:Person {id:id}) MERGE (o)-[:HEIR_TO]->(h))
FOREACH (r in case data.region when null then [] else [data.region] end | MERGE (o:Region {name:r}) MERGE (h)-[:IN_REGION]->(o));

After we have imported the dataset, our graph will have the schema shown below. You can always check the schema of your graph using CALL db.schema.


Categorical pagerank:

As in my previous blog post, we will use the SWORN_TO network of houses to demonstrate categorical pagerank, this time using regions as categories. This way we will try to understand and break down from which regions the houses get their power and support.

We first match all regions so that we can iterate the algorithm over each of them. In the node-statement of the cypher projection we project only the nodes belonging to a specific region, using a parameter. As we have already filtered the nodes by region, we don’t have to filter out any relationships: only relationships whose source and target nodes are both described in the node-statement are projected, and all others are ignored.

We will then save the results as a relationship property between region and house.

MATCH (r:Region)
CALL algo.pageRank.stream(
    'MATCH (h:House)-[:IN_REGION]->(r:Region)
    WHERE r.name ="' + r.name +
    '" RETURN id(h) as id',
    'MATCH (h1:House)-[:SWORN_TO]->(h2:House)
    RETURN id(h1) as source,id(h2) as target',
    {graph:'cypher'})
YIELD nodeId,score
MATCH (h:House) where id(h) = nodeId
MERGE (r)-[p:PAGERANK]->(h)
ON CREATE SET p.score = score

Let’s first examine the North.

MATCH (r:Region{name:"The North"})-[p:PAGERANK]->(h)
RETURN h.name as house,p.score as pagerank ORDER BY pagerank DESC

House Bolton leads, with House Stark following in second place. This might be disheartening to some fans, as the Starks are more lovable than the Boltons, but we all know how things ended for the Boltons in the TV series.


The Westerlands region is the home of House Lannister. Let’s see how well they do in their home region.

MATCH (r:Region{name:"The Westerlands"})-[p:PAGERANK]->(h)
RETURN h.name as house,p.score as pagerank ORDER BY pagerank DESC

The Lannisters have very strong direct support in their home region: House Farman is the only other house in the Westerlands that has the support of at least one house.



Second version:

As I was writing the blog post and running the above algorithm, I thought to myself that even though a house might not be based in a specific region, it might still have support from a house in that region, and hence support from that region.

For that reason I turned the projection around a bit: we now project all the houses of our graph and filter the SWORN_TO relationships to those whose source node is based in a specific region only. This translates directly to support from a house in that region.

We filter out pagerank scores below 0.151, as 0.15 is the default value for a node with no inbound relationships, and save the results as a relationship between a region and a house. This way we keep our graph tidy.

MATCH (r:Region)
CALL algo.pageRank.stream(
    'MATCH (h:House)
    RETURN id(h) as id',
    'MATCH (r:Region)<-[:IN_REGION]-(h1:House)-[:SWORN_TO]->(h2:House)
    WHERE r.name ="' + r.name +
    '" RETURN id(h1) as source,id(h2) as target',
    {graph:'cypher'})
YIELD nodeId,score
WITH nodeId,score,r where score > 0.151
MATCH (h:House) where id(h) = nodeId
MERGE (r)-[p:SUPPORT]->(h)
ON CREATE SET p.score = score

As we get back only 51 created relationships, we can easily visualize this network in Neo4j Browser. It is pretty obvious that House Baratheon of King’s Landing has support from most regions, lacking only the support of The Neck and the region Beyond the Wall.


Check top 20 individual regional pagerank scores.

MATCH (h:House)<-[s:SUPPORT]-(r:Region)
RETURN r.name as region, h.name as house, s.score as score
ORDER BY score DESC LIMIT 20

Both Baratheon houses are very dominant in the Crownlands region. House Tyrell comes in third in regional pagerank score, supported by The Reach region. House Tyrell is sworn to House Baratheon of King's Landing, and solely because of this relationship House Baratheon comes in immediately after House Tyrell in support from The Reach region. This pattern occurs through most of the graph, except for the North region, where House Baratheon comes in before the Starks and Boltons, having support from both of them.



With cypher projections we get all the freedom the cypher query language provides. We can even parametrize graph algorithms to run on only specific subgraphs, as shown in this blog post. Cypher projections are a very powerful tool that can be used to extract useful insights from our graphs and, if you are familiar with cypher, also quite easy to use.

Neo4j GoT social graph analysis

Lately I have been showing how to project a bi-partite graph to a mono-partite graph and run algorithms on it. To mix it up a little, I will demonstrate how to project a network between nodes using cosine similarity of certain features that the nodes possess. We will be using the Network of Thrones dataset based on the A Storm of Swords book. Check out their analysis of the same network as well.

To find communities of people that are in a similar position of power in the network, we will first calculate pagerank, harmonic centrality, triangle count and clustering coefficient for each node, and use those values as features from which we will infer a similarity network using cosine similarity.


Graph model:


William Lyon has also written a blog post on how to import and analyse this network. I have stolen both the graph model picture as well as the cypher query for importing the data into Neo4j.

LOAD CSV WITH HEADERS FROM "" AS row
MERGE (src:Character {name: row.Source})
MERGE (tgt:Character {name: row.Target})
MERGE (src)-[r:INTERACTS]->(tgt)
ON CREATE SET r.weight = toInt(row.Weight)


In graph theory and network analysis, indicators of centrality identify the most important nodes within a graph.

Given this assumption, we will use centrality indicators as features for the k-nearest neighbors algorithm (k-NN) that will infer a similarity network. As the centralities are not the focus of this blog post, I will skip the theory and just run the write-back versions of the algorithms.

Calculate and write back triangles count and clustering coefficient.

CALL algo.triangleCount('Character', 'INTERACTS',
{write:true, writeProperty:'triangles',clusteringCoefficientProperty:'coefficient'}) 
YIELD nodeCount, triangleCount, averageClusteringCoefficient;

Calculate and write back pageRank score

CALL algo.pageRank(
 'MATCH (c:Character) RETURN id(c) as id',
 'MATCH (p1:Character)-[:INTERACTS]-(p2:Character) 
  RETURN id(p1) as source, id(p2) as target',
{graph:'cypher', write: true, writeProperty:'pagerank'});
Calculate and write back harmonic centrality

CALL algo.closeness.harmonic(
  'MATCH (c:Character) RETURN id(c) as id',
  'MATCH (c1:Character)-[:INTERACTS]-(c2:Character) 
   RETURN id(c1) as source, id(c2) as target',
{graph:'cypher', writeProperty: 'harmonic'});


First we will normalize our features using the min-max method. The clustering coefficient is already normalized, so there is no need to do it again. If you want to do something similar on bigger graphs, I would suggest you use apoc.periodic.iterate for batching.

WITH ["pagerank","harmonic","triangles"] as keys
UNWIND keys as key
MATCH (c:Character)
WITH max(c[key]) as max,min(c[key]) as min,key
MATCH (c1:Character)
WITH c1, key + "normalized" AS newKey, 
    (toFloat(c1[key]) - min) / (max - min) as normalized_value
CALL apoc.create.setProperty(c1, newKey, normalized_value) 
YIELD node
RETURN count(*)
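The normalization step itself is just the classic rescale to [0, 1]; a quick Python illustration of the same formula:

```python
def min_max_normalize(values):
    # (x - min) / (max - min) rescales every value into [0, 1].
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

normalized = min_max_normalize([2.0, 4.0, 10.0])
```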

First we match all pairs of nodes and compare their cosine similarity. To avoid a complete graph where all nodes are connected to each other, we will set a similarity threshold, meaning that all relationships with cosine similarity less than 0.9 will be ignored and not stored. I am slightly cheating, as with the kNN algorithm you define how many nearest neighbours each node should be connected to, while here we are defining how similar a pair of nodes must be for a relationship to be stored.

Again for bigger graphs you should use APOC for batching. I wrote a blog post with a very similar example.

MATCH (c1:Character),(c2:Character) where id(c1) < id(c2)
WITH c1,c2,apoc.algo.cosineSimilarity([c1.pageranknormalized,
                                       c1.harmonicnormalized,
                                       c1.trianglesnormalized,
                                       c1.coefficient],
                                      [c2.pageranknormalized,
                                       c2.harmonicnormalized,
                                       c2.trianglesnormalized,
                                       c2.coefficient]) as cosine_similarity
WHERE cosine_similarity > 0.9
MERGE (c1)-[s:SIMILAR]-(c2)
SET s.cosine = cosine_similarity
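In plain Python, this projection step looks roughly like the sketch below (the character names and feature values are made up for illustration):

```python
import math

def cosine_similarity(a, b):
    # Dot product divided by the product of vector lengths.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Feature vectors: [pagerank, harmonic, triangles, coefficient], normalized.
features = {
    "Tyrion": [0.90, 0.80, 0.70, 0.60],
    "Cersei": [0.85, 0.82, 0.65, 0.60],
    "Hodor":  [0.01, 0.30, 0.00, 0.10],
}
names = list(features)
# Keep only pairs above the 0.9 similarity threshold, as the query does.
edges = [
    (a, b)
    for i, a in enumerate(names)
    for b in names[i + 1:]
    if cosine_similarity(features[a], features[b]) > 0.9
]
```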

Community Detection

A network is said to have community structure if the nodes of the network can be easily grouped into sets of nodes such that each set of nodes is densely connected internally. In the particular case of non-overlapping community finding, this implies that the network divides naturally into communities of nodes with dense connections internally and sparser connections between communities.


Let’s check what community structure the Louvain algorithm will find in our network.

CALL algo.louvain.stream('Character', 'INTERACTS', {})
YIELD nodeId, community
MATCH (c:Character) where id(c) = nodeId
RETURN community,
       count(*) as communitySize,
       collect(c.name) as members 
ORDER BY communitySize ASC LIMIT 20;

Illyrio Mopatis is all alone in Pentos and probably has no network power at all. The most interesting group is community 8, where the cream of the book is collected, ranging from the Starks, Targaryens and Lannisters to, interestingly, also Stannis, Mance and Davos.

Community 106 looks like a community of captains and maesters, and it differs from the largest community in that the largest community has a higher average clustering coefficient.



Let’s try the Label Propagation algorithm and check what it finds.

CALL algo.labelPropagation(
'MATCH (c:Character) RETURN id(c) as id, 1 as weight, id(c) as value',
'MATCH (c1:Character)-[f:INTERACTS]->(c2:Character) 
RETURN id(c1) as source, id(c2) as target, f.weight as weight',
'OUT', {graph:'cypher', partitionProperty:'lpa'})


We can immediately observe that LPA returns much more granular communities than Louvain in this network. The biggest community consists of Starks and Lannisters with the addition of Varys. It's safe to say he deserves this spot. On the other hand, Jon is in the community with all the members of the Night's Watch. Daenerys is also left out of the "strongest" community and shares a community with Jorah and Ser Barristan. She just wasn't such a badass in season 3 as she became in season 7 🙂


Neo4j Marvel Social Graph Algorithms Centralities

To top off the Marvel Social Graph series we will look at how to use centralities on a projected graph via cypher queries to find influencers or otherwise important nodes in our network using Neo4j and neo4j-graph-algorithms library.

To recap the series:

Graph projections via cypher queries:

As we noticed in the previous part, using graph projections via cypher queries, or "cypher loading" for short, is really great as it lets us filter and/or project virtual graphs easily and quickly. To let you fully take advantage of this awesome tool, we need to get to know exactly how it works.

Unlike the default label and relationship type loading of subsets of graphs, where we can in some cases define the direction of the relationship to be either "incoming", "outgoing" or "both" (bidirected/undirected), cypher loading does not support loading a single relationship as undirected.

While this may seem bad, it's actually not, as cypher loading allows us to get creative and try out graph algorithms on different virtual networks that we can project using cypher queries. We already did this in the previous post, but I haven't described it in detail yet.

Imagine that we have two hero nodes and a single directed relationship between them.
The only difference between loading this graph as undirected or directed is whether we specify the direction of the relationship in the cypher query. When we do not specify the direction of the relationship in the cypher query, the cypher engine returns each relationship twice, once in each direction, and that in turn projects our network as bidirected or undirected.

projecting directed network:

MATCH (u1:Hero)-[rel:KNOWS]->(u2:Hero)
RETURN id(u1) as source, id(u2) as target

projecting undirected network:

MATCH (u1:Hero)-[rel:KNOWS]-(u2:Hero)
RETURN id(u1) as source, id(u2) as target


In graph theory and network analysis, indicators of centrality identify the most important nodes within a graph. Applications include identifying the most influential person(s) in a social network, key infrastructure nodes in the Internet or urban networks, and super-spreaders of disease. Centrality concepts were first developed in social network analysis, and many of the terms used to measure centrality reflect their sociological origin.[1]


PageRank is Google’s popular search algorithm. It works by counting the number and quality of links to a node to determine a rough estimate of how important the node is. The underlying assumption is that more important nodes are likely to receive more links from other nodes.

More in documentation.

We will use cypher loading to load only the nodes of the biggest component and set a weight threshold of 100 for relationships.

// Match only the biggest component 
CALL algo.pageRank.stream(
'MATCH (u:Hero) WHERE u.component = 136 RETURN id(u) as id',
'MATCH (u1:Hero)-[k:KNOWS]-(u2:Hero)
// Similarity threshold 
WHERE k.weight >= 100
RETURN id(u1) as source, id(u2) as target
',{graph:'cypher'}) YIELD node, score
WITH node, score ORDER BY score DESC LIMIT 10
RETURN node.name as name, score;

Captain America has the highest pagerank score. He is located in the middle of the network with a total of 24 relations, including relations to most of the other important heroes in the network like Thor, Spiderman and Iron Man. If we check all heroes related to Captain America, we notice that they have on average a higher pagerank score simply because of this relation to Captain America.


* Node color from white (less) to red (more): Pagerank

Closeness centrality

Closeness centrality is defined as the total number of links separating a node from all others along the shortest possible paths. In other words, to calculate closeness, one begins by calculating, for each pair of nodes in the network, the length of the shortest path from one to the other (aka the geodesic distance). Then for each node, one sums up the total distance from the node to all other nodes.[2]

Closeness can be interpreted as an index of time-until-arrival of something flowing through the network. The greater the raw closeness score, the greater the time it takes on average for information originating at random points in the network to arrive at the node. Equally, one can interpret closeness as the potential ability of a node to reach all other nodes as quickly as possible.[2]

More in documentation.

We will use cypher loading to load only the nodes of the biggest component and set a weight threshold of 100 for relationships. With closeness centrality it is especially important that we load only a single component.

Unfortunately, when the graph is unconnected, closeness centrality appears to be useless because the distance between two nodes belonging to different components is infinite by convention, which makes the sum in 2 infinite too and therefore its inverse equal to zero. For every node of such a graph, there is always another node belonging to another component: indices of all vertices of the graph are therefore useless and the calculation of the index is limited to the largest component, omitting the roles played by individuals of other components.[3]

// Match only the biggest component 
CALL algo.closeness.stream(
'MATCH (u:Hero) WHERE u.component = 136 RETURN id(u) as id',
'MATCH (u1:Hero)-[k:KNOWS]-(u2:Hero) 
// Similarity threshold 
WHERE k.weight >= 100
RETURN id(u1) as source,id(u2) as target
',{graph:'cypher'}) YIELD nodeId, centrality
WITH nodeId,centrality 
ORDER BY centrality DESC LIMIT 10
MATCH (h:Hero) where id(h)=nodeId
RETURN h.name as hero, centrality

Captain America is in such a privileged position that he will lead in all categories of centralities. We can observe that nodes in tighter communities have higher closeness centrality indexes, while those on the brink and less connected have smaller values. The second thing we notice is that the overall position of a node in the graph also matters, as the middle community has on average higher closeness centrality than the others. As an example, both Iron Man and Vision have higher closeness centrality than Spiderman, while Spiderman has a higher pagerank index than them.


* Node color from white (less) to red (more): Closeness centrality

Harmonic Centrality

The harmonic mean has been known since the time of Pythagoras and Plato as the mean expressing “harmonious and tuneful ratios”, and later has been employed by musicians to formalize the diatonic scale, and by architects as paradigm for beautiful proportions.[4]

Social network analysis is a rapid expanding interdisciplinary field, growing from work of sociologists, physicists, historians, mathematicians, political scientists, etc. Some methods have been commonly accepted in spite of defects, perhaps because of the rareness of synthetic work like (Freeman, 1978; Faust & Wasserman, 1992). Harmonic centrality was proposed as an alternative index of closeness centrality defined on undirected networks. Results show its computation on real cases are identical to those of the closeness centrality index, with same computational complexity and we give some interpretations. An important property is its use in the case of unconnected networks.[3]
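To make the difference from classic closeness concrete, here is a small Python sketch (my own illustration, not the library's code) of harmonic centrality, where unreachable nodes simply contribute zero to the sum instead of making it infinite:

```python
from collections import deque

def bfs_distances(adj, source):
    # Unweighted shortest-path distances; unreachable nodes are absent.
    dist = {source: 0}
    queue = deque([source])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                queue.append(w)
    return dist

def harmonic_centrality(adj, v):
    # Sum of reciprocal distances to every reachable node.
    return sum(1.0 / d for u, d in bfs_distances(adj, v).items() if u != v)

# Path a-b-c plus an isolated node d: closeness would degenerate here,
# harmonic centrality stays well defined.
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b"], "d": []}
scores = {v: harmonic_centrality(adj, v) for v in adj}
```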

// Match only the biggest component 
CALL algo.closeness.harmonic.stream(
'MATCH (u:Hero) WHERE u.component = 136 RETURN id(u) as id',
'MATCH (u1:Hero)-[k:KNOWS]-(u2:Hero) 
// Similarity threshold 
WHERE k.weight >= 100 
RETURN id(u1) as source,id(u2) as target '
,{graph:'cypher'}) YIELD nodeId, centrality 
WITH nodeId,centrality 
ORDER BY centrality DESC LIMIT 10 
MATCH (h:Hero) where id(h)=nodeId 
RETURN h.name as hero, centrality

Harmonic centrality was proposed as an alternative for closeness centrality to help solve the problem of disconnected components. Because of this we get back very similar results, given that we also have a single connected component.


Betweenness Centrality

In graph theory, betweenness centrality is a measure of centrality in a graph based on shortest paths. For every pair of nodes in a connected graph, there exists at least one shortest path between the vertices such that either the number of relationships that the path passes through (for unweighted graphs) or the sum of the weights of the edges (for weighted graphs) is minimized. The betweenness centrality for each node is the number of these shortest paths that pass through the node.[6]

More in documentation.
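Betweenness implementations are typically based on Brandes' algorithm; as a rough illustration, here is a plain Python sketch of it for unweighted, undirected graphs (my own toy code, not the library's implementation):

```python
from collections import deque

def betweenness(adj):
    # Brandes' algorithm: one BFS per source plus dependency accumulation.
    bc = {v: 0.0 for v in adj}
    for s in adj:
        stack, queue = [], deque([s])
        pred = {v: [] for v in adj}               # predecessors on shortest paths
        sigma = {v: 0 for v in adj}; sigma[s] = 1  # number of shortest paths
        dist = {v: -1 for v in adj}; dist[s] = 0
        while queue:
            v = queue.popleft()
            stack.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    pred[w].append(v)
        delta = {v: 0.0 for v in adj}
        while stack:                               # accumulate dependencies
            w = stack.pop()
            for v in pred[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    # each unordered pair was counted from both endpoints
    return {v: c / 2 for v, c in bc.items()}

# "beast" bridges the left trio and the right chain, so he sits on the
# most shortest paths (a hypothetical mini-network for illustration).
adj = {
    "captain":   ["thor", "beast"],
    "thor":      ["captain", "beast"],
    "beast":     ["captain", "thor", "cyclops"],
    "cyclops":   ["beast", "wolverine"],
    "wolverine": ["cyclops"],
}
scores = betweenness(adj)
```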

We will use cypher loading to load only the nodes of the biggest component and set a weight threshold of 100 for relationships.

// Match only the biggest component
CALL algo.betweenness.stream(
'MATCH (u:Hero) WHERE u.component = 136 RETURN id(u) as id',
'MATCH (u1:Hero)-[k:KNOWS]-(u2:Hero) 
// Similarity threshold
WHERE k.weight >= 100
RETURN id(u1) as source,id(u2) as target
',{graph:'cypher'}) YIELD nodeId, centrality
WITH nodeId,centrality 
ORDER BY centrality DESC LIMIT 10
MATCH (h:Hero) where id(h)=nodeId
RETURN h.name as hero, centrality

As always, Captain America is in first place, this time with Beast in second place. This comes as no surprise, as we can observe that he is the sole bridge between the middle and right communities. Spiderman and Incredible Hulk play a similar role as Beast, but have smaller communities behind them and hence also smaller betweenness scores.


* Node color from white (less) to red (more): Betweenness centrality







Neo4j Marvel Social Graph Algorithms Community Detection

In the first part we inferred a Hero to Hero network from a bi-partite graph of heroes and comics. It was followed by the second part, where we got to know some basic network information to help us get a sense of what kind of network we are dealing with.

In this part I will continue by finding resilient communities within our network using the Louvain method and the Label Propagation algorithm.

* Visualizations are made with Gephi. Check my previous post Neo4j to Gephi for more information. Another option is to use neovis.js to visualize communities.

Graph projection

Neo4j graph algorithms support two ways of loading a subset of the graph as a virtual graph to quickly run the algorithms on. The first one is known as label and relationship-type loading, where we load nodes by labels and relationships by their types.

What if we wanted to run algorithms on very specific subsets of graphs, but labels and relationship types are not descriptive enough or we do not want to update our actual graph?

Luckily we can load or project subsets of our graph using Cypher statements. Use a cypher query to fetch nodes instead of the label parameter and a second cypher query for fetching relationships instead of the relationship-type parameter.

Use graph:'cypher' in the config.


CALL algo.unionFind(
//First cypher query is used to fetch nodes
    'MATCH (p:User)
     WHERE p.property = "import"
     RETURN id(p) as id',
//Second cypher query is used to fetch relationships
    'MATCH (p1:User)-[f:FRIEND]->(p2:User) 
     RETURN id(p1) as source, id(p2) as target, f.weight as weight',
    {graph:'cypher'})

Projecting and loading graphs via cypher queries allows us to describe the graph we want to run algorithms on in great detail. Not only that, but we can also use it to project virtual graphs from indirect patterns or omit some relationships to be loaded without actually deleting them.

Cypher projection use cases:
  • Filtering nodes and relationships.
  • Loading indirect relationships.
  • Projecting bidirected graph. Example
  • Similarity threshold. (discussed here)

Community detection

In the study of networks, such as computer and information networks, social networks and biological networks, a number of different characteristics have been found to occur commonly, including the small-world property, heavy-tailed degree distributions, and clustering, among others. Another common characteristic of networks is community structure, which is the pattern of connections and groupings. The connections within real-world networks are not homogenous or random which suggests certain natural divisions exist.[1]

Community detection algorithms do not perform well in a very connected graph, as most of the nodes are densely connected and hence belong to the same community. This usually leads to poor results, where we end up with one big community that stretches over most of the graph and some small communities.

We introduce the concept of a similarity threshold, where the weight of the relationship has to be above a certain value or the relationship is ignored. We can easily exclude these relationships using graph projections via cypher queries.

In this post we will set the weight threshold to 100 so the resulting communities should be tightly-knit and resilient.

Connected components

Connected Components, or the UnionFind algorithm, basically finds sets of connected nodes, also known as islands, where each node is reachable from any other node in the same set.
In graph theory, a connected component of an undirected graph is a subgraph in which any two nodes are connected to each other by paths, and which is connected to no additional nodes in the graph. More in documentation.
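The idea behind UnionFind fits in a few lines of Python (an illustrative toy, not the library's implementation):

```python
def connected_components(nodes, edges):
    # Union-find with path halving: each set's members share one root.
    parent = {v: v for v in nodes}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    for a, b in edges:
        parent[find(a)] = find(b)

    components = {}
    for v in nodes:
        components.setdefault(find(v), []).append(v)
    return list(components.values())

# Two islands: {a, b, c} and {d, e}.
comps = connected_components(
    ["a", "b", "c", "d", "e"],
    [("a", "b"), ("b", "c"), ("d", "e")],
)
```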

As with any new network I come across, I first want to know how many connected components exist in the network and what their sizes are. Because we will set the weight threshold to 100, we will get back a much less connected (sparser) graph than in the previous post, where a threshold of 10 was used.

We use the default label and relationship type loading in this example, where we load all nodes labeled “Hero” and relationships of type “KNOWS”. We also set threshold to 100, which means that only relationships with weight greater than 100 are considered by the algorithm.

CALL algo.unionFind.stream('Hero', 'KNOWS',
{weightProperty:'weight', defaultValue:0.0, threshold:100.0}) 
YIELD nodeId,setId
RETURN setId as component,count(*) as componentSize
ORDER BY componentSize DESC LIMIT 10;

As expected, our graph is sparse, with one big component that has 101 members and 6 small components with 2-4 members. 116 out of a total of 6439 heroes have at least 1 relationship with a weight of more than 100.

If we visualize the largest component of 101 nodes in Neo4j Browser, we can easily observe that there are some intuitive communities hidden here and some bridge nodes between those communities. We will try to define community structure of this subgraph with the help of Louvain method and Label Propagation algorithm.



Communities are groups of nodes within a network that are more densely connected to one another than to other nodes. Modularity is a metric that quantifies the quality of an assignment of nodes to communities by evaluating how much more densely connected the nodes within a community are compared to how connected they would be, on average, in a suitably defined random network. The Louvain method of community detection is an algorithm for detecting communities in networks that relies upon a heuristic for maximizing the modularity.[2]
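Since modularity is the quantity the Louvain method optimizes, here is a small Python sketch (illustrative only) of how modularity scores a given assignment of nodes to communities:

```python
def modularity(adj, communities):
    # Q = sum over communities of e_c / m - (d_c / 2m)^2, where e_c is the
    # number of intra-community edges and d_c the community's total degree.
    m = sum(len(nbrs) for nbrs in adj.values()) / 2  # undirected edge count
    q = 0.0
    for community in communities:
        members = set(community)
        e_c = sum(1 for v in members for w in adj[v] if w in members) / 2
        d_c = sum(len(adj[v]) for v in members)
        q += e_c / m - (d_c / (2 * m)) ** 2
    return q

# Two triangles joined by a single edge, split into their natural groups.
adj = {
    0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
    3: [2, 4, 5], 4: [3, 5], 5: [3, 4],
}
q = modularity(adj, [[0, 1, 2], [3, 4, 5]])
```

Splitting the two triangles apart scores well here; merging everything into one community would score exactly zero.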

As mentioned we will use graph projecting via cypher queries to load only relationships with weight 100 or more.

CALL algo.louvain.stream(
// load nodes
    'MATCH (u:Hero) RETURN id(u) as id', 
// load relationships
    'MATCH (u1:Hero)-[rel:KNOWS]-(u2:Hero) 
// similarity threshold 
    WHERE rel.weight > 100
    RETURN id(u1) as source,id(u2) as target',
    {graph:'cypher'})
YIELD nodeId,community
MATCH (n:Hero) WHERE id(n)=nodeId
RETURN community,
       count(*) as communitySize, 
       collect(n.name) as members 
ORDER BY communitySize DESC LIMIT 5

I use Gephi for visualizing communities as it is more pleasant and insightful to look at good visualizations instead of tables.

*Node color: Louvain community, node size: Pagerank, name size: Betweenness centrality


I am not an expert in the Marvel domain, so I will just give a brief explanation of the results. We get a total of 8 communities. The largest and best positioned community is the purple one. It consists mostly of team Avengers and S.H.I.E.L.D., with Captain America being the leader. On the left we can find Mr. Fantastic and the Fantastic Four team in the same purple community. Light blue is the Spiderman team, with Spiderman being their only connection to the outside world. They keep to themselves and don't mingle with others. Dark blue are the Asgardians, who also keep to themselves and communicate with the outside world only through Thor. The Incredible Hulk series has its own community, with Hulk again being the only connection to the outside world. We observe that Beast is in a unique position as a bridge between the purple and green communities, the latter being the X-Men community.

Label propagation algorithm

Label Propagation algorithm which was first proposed by Raghavan et al. (2007) uses unique identifiers of nodes as labels and propagates the labels based on an agreement with the majority of the neighbour nodes and each node selects a label from its neighbourhood to adopt it as its label. LPA works as follows: Node x has neighbours and each neighbour carries a label denoting the community to which they belong to. Each node in the network chooses to join the community to which the maximum number of its neighbours belongs to, with ties broken uniformly and randomly. At the beginning, every node is initialized with unique label (called as identifier) and the labels propagate through the network. At every step of propagation, each node updates its label based on the labels of its neighbours. As labels propagate, densely connected groups of nodes quickly reach a consensus on a unique label.[3]

More in documentation.
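A toy Python version of the propagation loop looks like this (for reproducibility I break ties deterministically, keeping the current label if it is among the most frequent and otherwise taking the smallest, while the real algorithm breaks ties randomly):

```python
def label_propagation(adj, max_iters=20):
    labels = {v: v for v in adj}  # every node starts with a unique label
    for _ in range(max_iters):
        changed = False
        for v in sorted(adj):
            counts = {}
            for w in adj[v]:
                counts[labels[w]] = counts.get(labels[w], 0) + 1
            if not counts:
                continue
            best = max(counts.values())
            top = {label for label, c in counts.items() if c == best}
            if labels[v] not in top:
                labels[v] = min(top)
                changed = True
        if not changed:  # converged: every label is a neighbourhood majority
            break
    return labels

# Two disconnected triangles converge to one label per component.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1], 3: [4, 5], 4: [3, 5], 5: [3, 4]}
labels = label_propagation(adj)
```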

Similar to Louvain algorithm we will use graph projecting via cypher queries to load only relationships that have weight more than 100. I used the write back version, so that I could export the results to Gephi for visualization.

CALL algo.labelPropagation(
// supports node-weights and defining 
// initial communities using parameter value
    'MATCH (u:Hero) RETURN id(u) as id, 1 as weight,id(u) as value',
// load relationships 
    'MATCH (u1:Hero)-[rel:KNOWS]-(u2:Hero) 
// Similarity threshold
    WHERE rel.weight > 100
    RETURN id(u1) as source,id(u2) as target, rel.weight as weight',
'OUT',{graph:"cypher",partitionProperty:"lpa" }) 
YIELD computeMillis

We get back 21 communities with some single node communities.
Team Avengers(purple) and Fantastic Four(light blue) get split up into two separate communities. Spiderman(green), Incredible Hulk(turquoise) and Asgardians(red) communities are the same as in the Louvain results. We observe that the X-Men faction also gets split up into two communities, and the Cannonball group is slightly bigger this time and not so isolated.

*Node color: LPA community, node size: Pagerank, name size: Betweenness centrality


Hope you have noticed by now that the neo4j graph algorithms plugin is really awesome and easy to use. Combine that with the ability to project virtual graphs via cypher queries and you get an easy and efficient way to analyse and understand your graph.

At first I was going to show how to run centrality algorithms in this blog post as well, but decided not to as it would be way too long, so I will shortly post another blog post with examples of centralities and cypher projections.

Stay tuned!





Neo4j Marvel Social Graph Analysis

This is part 2 of the Marvel series. In the previous post I imported the data from the kaggle competition and showed how to project a mono-partite graph from a bipartite graph. We used a topological similarity measure that takes into account how many common comics any pair of heroes have.

For easier understanding we can represent this as the following pattern:



(:Hero)←[:KNOWS{weight=Nb of Common Comics}]→(:Hero)

To find out more, check this Gephi tutorial. We could have also projected a mono-partite graph looking like:

(:Comic)←[{weight=Nb of Common Heroes}]→(:Comic)

You will need to have the graph imported to proceed with the following steps in analysis.


Graph model:


We end up with a simple graph model. We started with a bi-partite(two types/labels of nodes) network of Heroes and Comics and then inferred a mono-partite (single type/label of nodes) graph amongst Heroes based on the number of common Comics.


We will analyse the inferred network of Heroes. I usually start with some global statistics to get a sense of the whole graph and then dive deeper.

Weight distribution

First we check the distribution of similarity weights between pairs of heroes. Weight value translates to the number of common Comics Heroes have appeared in.

MATCH ()-[k:KNOWS]->()
RETURN (k.weight / 10) * 10 as weight,count(*) as count 

You might wonder why we use (k.weight / 10) * 10, as it looks silly at first glance. If we divide an integer by an integer in Neo4j, we get back an integer. I use it as a bucketizing function that groups numeric data into bins of 10, so that the results are easier to read.
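The same trick in Python uses integer (floor) division:

```python
def bucketize(value, width=10):
    # Integer division drops the remainder, so (value // 10) * 10
    # maps e.g. 27 -> 20, grouping values into bins of 10.
    return (value // width) * width

# Count how many (made-up) weights fall into each bin.
buckets = {}
for weight in [1, 3, 9, 14, 27, 112]:
    bucket = bucketize(weight)
    buckets[bucket] = buckets.get(bucket, 0) + 1
```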


162489 out of a total of 171644 relationships (94%) in our network have a weight of 9 or less. This means that most of our Heroes have only briefly met.

The largest weight is between “THING/BENJAMIN J. GR” and “HUMAN TORCH/JOHNNY S” at a value of 724. I would assume they are good buddies.

Even though we have a well connected graph, most of relationships are “weak” judging by weight. I would assume that most comics have standard teams of heroes, where not necessarily all of the team appear in every comic.

The second assumption I would make is that there are occasional comics where different “teams” of heroes appear together, hence so many weak relationships.

To check my assumptions I start with this query to get a basic feel.

MATCH (u:Comic)
RETURN avg(apoc.node.degree(u,'APPEARED_IN')) as average_heroes,
stdev(apoc.node.degree(u,'APPEARED_IN')) as stdev_heroes,
max(apoc.node.degree(u,'APPEARED_IN')) as max_heroes,
min(apoc.node.degree(u,'APPEARED_IN')) as min_heroes


I personally prefer distribution over average. We use the “bucketizing” function as before:

MATCH (u:Comic)
RETURN (size((u)-[:APPEARED_IN]-()) / 10) * 10 as heroCount,
count(*) as times
ORDER BY times DESC limit 20

Looks like my assumptions are plausible. 8999 (71%) of comics have fewer than 10 heroes appearing, with most of them probably at around 5 or less, as the overall average is 7.5. There are a few comics where we have “family gatherings” with more than 30 heroes. One of them is a comic named COC1, probably Contest of Champions, where 110 heroes appear.

Normalize weight:

If we need to, we can normalize the weight using min-max method.
Notice how this time we use (toFloat(k1.weight) - min) / (max - min). By casting the weight to a float, we divide a float by an integer, which returns a float and does not bucketize/group the numbers into bins.

MATCH (:Hero)-[k:KNOWS]->(:Hero) 
//get the max and min value
WITH max(k.weight) as max,min(k.weight) as min
MATCH (:Hero)-[k1:KNOWS]->(:Hero) 
SET k1.weight_minmax = (toFloat(k1.weight) - min) / (max - min)

Triangle count / Clustering coefficient

In graph theory, a clustering coefficient is a measure of the degree to which nodes in a graph tend to cluster together. Evidence suggests that in most real-world networks, and in particular social networks, nodes tend to create tightly knit groups characterised by a relatively high density of ties; this likelihood tends to be greater than the average probability of a tie randomly established between two nodes (Holland and Leinhardt, 1971;[1] Watts and Strogatz, 1998[2]).

Two versions of this measure exist: the global and the local. The global version was designed to give an overall indication of the clustering in the network, whereas the local gives an indication of the embeddedness of single nodes.[1]
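As an illustration, the local clustering coefficient of a single node can be sketched in Python like this:

```python
def clustering_coefficient(adj, v):
    # Fraction of pairs of v's neighbours that are themselves connected:
    # 2 * links / (k * (k - 1)) for a node of degree k.
    nbrs = list(adj[v])
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(
        1
        for i in range(k)
        for j in range(i + 1, k)
        if nbrs[j] in adj[nbrs[i]]
    )
    return 2.0 * links / (k * (k - 1))

# Triangle a-b-c with a pendant node d attached to a.
adj = {"a": {"b", "c", "d"}, "b": {"a", "c"}, "c": {"a", "b"}, "d": {"a"}}
```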

CALL algo.triangleCount('Hero', 'KNOWS',
{write:true, writeProperty:'triangles',
clusteringCoefficientProperty:'coefficient'})
YIELD nodeCount, triangleCount, averageClusteringCoefficient;

Running this algorithm writes back the local triangle count and clustering coefficient, while returning the total triangle count and average clustering coefficient of the graph.
Find out more in documentation.


Average clustering is very high at 0.77, considering that 1 would mean we have a complete graph, where everybody knows each other. This comes as no surprise as we observed before that our network is very connected with most of relationships being “weak”.

Connected Components

Testing whether a graph is connected is an essential preprocessing step for every graph algorithm. Such tests can be performed so quickly and easily that you should always verify that your input graph is connected, even when you know it has to be. Subtle, difficult-to-detect bugs can result when you run your algorithm on a disconnected graph.


Connected components have other practical use cases, for example, if we are analysing a social network and we want to find all the disconnected groups of people that exist in our graph.

More in documentation.

CALL algo.unionFind.stream('Hero', 'KNOWS', {}) 
YIELD nodeId,setId
WITH setId,count(*) as communitySize
// filter out single node communities
WHERE communitySize > 1 
RETURN setId,communitySize
ORDER BY communitySize DESC LIMIT 20


Our graph consists of 22 components in total, with the largest covering almost the whole graph (99.5%). There are 18 single-node communities, and 3 very small ones with 9, 7 and 2 members.

Let's check who the members of these small components are.

CALL algo.unionFind.stream('Hero', 'KNOWS', {}) 
YIELD nodeId,setId
WITH setId,collect(nodeId) as membersId,count(*) as communitySize 
// skip the largest component
ORDER BY communitySize DESC SKIP 1 LIMIT 20
MATCH (h:Hero) WHERE id(h) in membersId
RETURN setId,collect( as members 
ORDER BY length(members) DESC

We need to match nodes by their nodeId from result so that we can get back the names of heroes.


18 heroes never appeared in a comic with any other hero. Some of them are RED WOLF II, RUNE, DEATHCHARGE etc. The second largest component, with 9 members, seems to be the ASHER family and some of their friends.

Weighted Connected Components

What if we decided that two heroes co-appearing in a single comic is not enough interaction to tag them as colleagues, and raised the bar to 10 common comics? Here is where the weighted Connected Components algorithm can help us.

If we define the property that holds the weight (weightProperty) and a threshold, two nodes are only considered connected if the weight of the relationship between them is above the threshold; otherwise the relationship is ignored.

In our case it means that two heroes need at least 10 common comics to be considered connected.
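The thresholding itself is just an edge filter applied before the components are computed. A sketch on weighted edges (the hero pairs and comic counts are made up for illustration):

```python
def filter_by_threshold(weighted_edges, threshold):
    """Keep only relationships whose weight reaches the threshold,
    mirroring the weightProperty/threshold pair in the procedure call."""
    return [(a, b) for a, b, w in weighted_edges if w >= threshold]

# (hero1, hero2, number of common comics) -- sample data, not from the dataset
weighted = [("HULK", "THOR", 96), ("HULK", "RUNE", 2), ("THOR", "RUNE", 1)]

print(filter_by_threshold(weighted, 10.0))  # → [('HULK', 'THOR')]
print(filter_by_threshold(weighted, 1.0))   # all three edges survive
```

The surviving edge list would then be fed into the same union-find computation as before, so weak co-appearances no longer glue the graph together.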

CALL'Hero', 'KNOWS', 
{weightProperty:'weight', threshold:10.0}) 
YIELD nodeId,setId
RETURN setId,count(*) as communitySize
ORDER BY communitySize DESC LIMIT 20;

Define weightProperty and threshold.


As we might expect, the biggest component drops significantly in size, from 99.5% to 18.6% of the total number of nodes. The threshold can be used to find potentially resilient teams of heroes, as more interactions between heroes (we take any co-appearance as positive) means a longer history and more bonding, which can translate to resilience.

We can play around with different threshold settings.

Not only the community sizes are interesting, but also their members. We check the members of the largest communities, skipping the first one, as showing 1,200 names in the browser would be messy.

CALL'Hero', 'KNOWS', 
{weightProperty:'weight', defaultValue:0.0, threshold:10.0}) 
YIELD nodeId,setId
WITH setId,collect(nodeId) as membersId,count(*) as communitySize 
// skip the largest component
ORDER BY communitySize DESC SKIP 1 LIMIT 20
MATCH (h:Hero) WHERE id(h) in membersId
RETURN setId,collect( as members 
ORDER BY size(members) DESC


The second largest community has members with really cool names like Bodybag, China Doll and Scatter Brain. As I have no clue about the domain, I can’t really comment further.



While this post is long, it does not contain everything I want to show on the Marvel graph, so there will be one more post using centralities with Cypher loading, and maybe some community detection with the Louvain and Label Propagation algorithms.

Hope you learned something today!



Neo4j London tube system analysis

I recently came across the London tube dataset uploaded by Nicola Greco. I thought it would be a cool example to show some algorithms from the new Neo4j graph algorithms plugin. They are really useful, as they allow network analysis without using external services to run the algorithms on.


Graph model:

We will create a very simple graph model, with stations and connections between them, as shown below. You can always check the schema of your graph with the db.schema() procedure.


Let's define the schema for our graph model with a constraint and an index, so that all queries will run faster.

CREATE CONSTRAINT ON (s:Station) ASSERT IS UNIQUE;

CREATE INDEX ON :Station(name);


LOAD CSV can import data from local files or from the internet, which is really cool, as we can access data on GitHub easily without any intermediate steps.

Import stations

"" as row
MERGE (s:Station{})

Import connections between stations.
We create relationships in both directions as trains also travel in both directions. We save the time spent traveling between stations as a relationship property, so that we can use it as a weight in our algorithms.

"" as row
MATCH (s1:Station{id:row.station1})
MATCH (s2:Station{id:row.station2})
MERGE (s1)-[:CONNECTION{time:row.time,line:row.line}]->(s2)
MERGE (s1)<-[:CONNECTION{time:row.time,line:row.line}]-(s2)


Which stations have the most connections to other stations?

MATCH (n:Station)--(n1)
RETURN as station,
       count(distinct(n1)) as connections 
ORDER BY connections DESC LIMIT 15



Find the fastest path between two stations.

algo.shortestPath uses the Dijkstra algorithm to return the shortest path between a start and an end node. For more info check the documentation.

transfers are not taken into account

MATCH (start:Station{name:"Baker Street"}),(end:Station{name:"Beckton Park"})
CALL, end, 'time')
YIELD nodeId, cost
MATCH (s:Station) where id(s)=nodeId
RETURN as station,cost as time
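The Dijkstra traversal behind this procedure can be sketched in a few lines of Python. The station names and travel times below are invented for illustration; real times come from the imported CSV:

```python
import heapq

def dijkstra(graph, start, end):
    """Return (total time, path) for the cheapest route from start to end.
    `graph` maps a station to a list of (neighbour, time) pairs."""
    queue = [(0, start, [start])]  # priority queue ordered by accumulated cost
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == end:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbour, time in graph.get(node, []):
            if neighbour not in visited:
                heapq.heappush(queue, (cost + time, neighbour, path + [neighbour]))
    return float("inf"), []  # no route exists

# Toy network; two routes to Oxford Circus, one faster than the other.
graph = {
    "Baker Street": [("Bond Street", 2), ("Great Portland Street", 2)],
    "Bond Street": [("Oxford Circus", 1)],
    "Great Portland Street": [("Oxford Circus", 4)],
    "Oxford Circus": [],
}
print(dijkstra(graph, "Baker Street", "Oxford Circus"))
# → (3, ['Baker Street', 'Bond Street', 'Oxford Circus'])
```

The priority queue always expands the cheapest frontier node first, which is why the first time we pop the destination we already have the optimal cost.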



Find out which stations take the longest to get to from Baker Street station.

algo.shortestPaths is a Dijkstra-based algorithm that returns the shortest paths from a start node to all other nodes in the network.

transfers are not taken into account

MATCH (n:Station {name:'Baker Street'})
CALL, 'time')
YIELD nodeId, distance
MATCH (s:Station) where id(s)=nodeId
RETURN as station,distance as time 
ORDER BY time desc LIMIT 15



Find the longest shortest paths between stations in the network.

algo.allShortestPaths returns the shortest paths between all pairs of nodes in the network.

transfers are not taken into account

CALL'time')
YIELD sourceNodeId, targetNodeId, distance
WITH sourceNodeId, targetNodeId, distance 
ORDER BY distance desc LIMIT 20
// We filter out duplicates
WHERE sourceNodeId > targetNodeId
MATCH (s1:Station),(s2:Station) 
WHERE id(s1)=sourceNodeId AND id(s2)=targetNodeId
RETURN as station1, as station2,distance as time



Find bridge stations in the network.

We can use betweenness centrality to help us identify the stations that people will pass through the most, based on the structure of the network.

algo.betweenness is a betweenness centrality algorithm that helps us find bridges in the network. For more info check the documentation.
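Conceptually, a node's betweenness is the number of shortest paths between every other pair of nodes that pass through it. A brute-force Python sketch makes the idea concrete (the library uses the far faster Brandes algorithm; this toy version is only for intuition):

```python
from collections import deque
from itertools import permutations

def betweenness(adj):
    """Brute-force unweighted betweenness: for each ordered pair (s, t),
    enumerate all shortest paths with BFS and credit their interior nodes."""
    score = {n: 0.0 for n in adj}
    for s, t in permutations(adj, 2):
        paths, best = [], None
        queue = deque([[s]])
        while queue:
            path = queue.popleft()
            if best is not None and len(path) > best:
                continue  # longer than a known shortest path
            node = path[-1]
            if node == t:
                best = len(path)
                paths.append(path)
                continue
            for nb in adj[node]:
                if nb not in path:
                    queue.append(path + [nb])
        for path in paths:
            for node in path[1:-1]:  # interior nodes carry the traffic
                score[node] += 1 / len(paths)
    return score

# A path graph A - B - C: every trip between A and C must cross B.
adj = {"A": {"B"}, "B": {"A", "C"}, "C": {"B"}}
print(betweenness(adj))  # B scores 2.0 (A→C and C→A); A and C score 0
```

B acts as the sole bridge here, exactly the role a high-betweenness tube station plays in the real network.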

CALL'Station', 'CONNECTION')
YIELD nodeId, centrality
MATCH (n:Station) where id(n)=nodeId
RETURN as station, centrality
ORDER BY centrality desc




It is very easy to enhance Neo4j with graph algorithms. You just copy the .jar file to the /plugins folder, restart Neo4j and you are good to go. The plugin offers lots of cool algorithms that can help you analyse your graph in Neo4j. You should try them out!