GraphStuff.FM: The Neo4j Graph Database Developer Podcast

Will It Graph? Identifying A Good Fit For Graph Databases

Episode Summary

How do you know when the application you're building is a good fit for a graph database? How do graph databases work under the hood and how is this different from relational databases? What use cases are NOT a good fit for a graph database? Join Lju and Will as we answer these questions, exploring how the graph native architecture of Neo4j lends itself to solving graphy problems and how to identify a graph-shaped problem.

Episode Notes

00:00 - 02:12 Introduction

02:12 - 09:38 Comparing Graph Native & RDBMS

09:38 - 29:50 What are good and poor fits for graph native?

29:50 - 41:08 Graph native as a general purpose database

Episode Transcription

Lju Lazarevic (00:00):

Hello everybody, and welcome to GraphStuff, your one-stop shop for all things graphy from a developer's perspective. We are your hosts, Lju Lazarevic and William Lyon. In this episode, we're going to be discussing what makes a good graph fit for transactional systems. Firstly, we're going to have a look at how a native graph database works under the hood. Based on this, we're going to explore how this specific architecture lends itself to solving graphy problems, as well as where it doesn't work so well. We're also going to have a look at some unexpected, useful applications of this architecture, such as rapid prototyping. So first of all, how does a traditional relational database management system work?

 

William Lyon (00:43):

Yeah, so a traditional relational database is a good starting point, since that's a system a lot of developers are familiar with. It's what we're all taught in school when we take our database class, or probably a tool you've used at some point in your career as a developer. In a traditional relational database, data is stored in tables. A table has a well-defined schema that defines the attributes and their types that exist in the table, and then each row in the table is our discrete entity of data.

 

William Lyon (01:30):

One of the attributes in the row is typically used to define uniqueness. This is the primary key: the identifier that makes this row unique. It could be something like a unique ID, or if I have something like a social security number for a person, we can treat that as the primary key. We then go through a process called normalization to reduce data repetition. In normalization, we move references, maybe something like an address for a person, into another table. And then we have a reference from the row representing the person to the row representing the address for that person.

 

William Lyon (02:18):

Then at query time, when we want to reconstitute this normalized data, we do what's called a join operation. In our main entity row, we have the primary key that identifies the entity, let's say the person. And then we have what's called a foreign key that references a row in our address table, and we use that to go look up the address in the address table. This is called a join, and these joins are done at query time, at read time. So when I'm doing a join in a relational database, that's a set comparison operation: I'm looking to see where my two sets, in this case the person table and the address table, overlap, based on that foreign key. That is my join to reconstitute my normalized data at query time. So at a high level, that's how traditional relational databases work.

 

Lju Lazarevic (03:26):

Yeah, and I guess it's probably worth mentioning why that exists historically, and there are a number of reasons. One is to reduce the amount of data repetition, as we mentioned before, and this is quite a key thing when you think about keeping a single version of the truth. If, for example, somebody changes their name for whatever reason, you wouldn't want multiple versions of that person everywhere, where you then have to try and remember all the different instances of where that person exists. You know you've got one version of that person because you've done the normalization process. So you make the update in one place, and then this whole reconstituting process means that the change filters through wherever it's applicable. So there are many reasons why we have this process.

 

Lju Lazarevic (04:15):

So let's have a quick peek at a native graph database and how that works. We spoke about the discrete entity in a relational database being a row within a table. Within a native graph database, that row would be the equivalent of a node. So it's still a discrete entity, and very approximately that's how it would look. We still have this element of normalization: this node is one entity, so if we had person nodes, as in our previous example, we would have one of those per person, and we would have some degree of uniqueness in it. Let's go with social security number. The key difference, however, is in how we connect this person node to, let's say, another discrete entity, which is going to be an address. Again, that would be the row out of our address table. When we know that the person has a relationship with the address, we create a physical connection between those two points.

 

Lju Lazarevic (05:19):

This would be a join on write. And how this is manifested is through pointers that record the addresses of the records involved: the node has a pointer that says what the outbound part of the relationship connecting to it is, and the relationship then has another pointer for the inbound part, pointing to the other node. So effectively we're collecting a set of pointers, and this is the manifestation of a physical join between those two entities. And that is a big difference.
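
[As a rough illustration, materializing that join on write might look like this in Cypher, Neo4j's query language. This is a sketch: the labels, relationship type, and property values are invented for the example.]

// Create the two discrete entities and the physical connection between them.
// The relationship is stored as pointers at write time, the "join on write".
CREATE (p:Person {ssn: '123-45-6789', name: 'Ada'})
CREATE (a:Address {street: '1 Graph Way', city: 'Springfield'})
CREATE (p)-[:LIVES_AT]->(a)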

 

Lju Lazarevic (05:50):

Whereas before, the way you would reconstitute the data in a relational database was with these joins on read: at query time, we'd go off and try and figure out how things map together. In a graph database, because we did these joins on write, as soon as we knew those two elements were connected, we put the connection in. At query time, we're not having to do this lookup, this mapping. All we're doing is, at the node that we've found, having a look to see what relationships are connected to this node. And this is something that we call index-free adjacency.
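
[Reading it back is then just pattern matching along those stored pointers. Again a sketch, using the same invented model as above.]

// Traverse from the person to their address by chasing pointers;
// no join is computed at read time.
MATCH (p:Person {ssn: '123-45-6789'})-[:LIVES_AT]->(a:Address)
RETURN a.street, a.city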

 

William Lyon (06:27):

Right. So this concept of index-free adjacency, I think, is really key to understanding the performance optimizations that a native graph database makes relative to the optimizations that other database systems make. Index-free adjacency means that during a local graph traversal, as I'm traversing the graph, following these pointers, following these relationships connecting nodes in my graph, the performance of that operation is not dependent on the overall size of the graph. It's rather dependent on the number of relationships that the nodes I'm traversing have.

 

William Lyon (07:11):

We talked about a join being a set comparison operation, where we're using an index in a relational database to see where those two sets overlap. And that means the performance of that join operation starts to slow down as my tables get bigger. In big O notation terms, an index lookup is something like logarithmic growth, O(log n), whereas traversing relationships in the graph is more like linear growth in the number of relationships on the nodes I'm traversing, not in the overall size of the graph. That is the fundamental query-time optimization graph databases make, and it's what gives us index-free adjacency. So I think, from a performance perspective, that is really the most important thing to think about when we think about graph native.
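
[As a rough back-of-the-envelope illustration (our numbers, not the episode's): an index-backed join against a table of a billion rows costs on the order of log2(10^9), roughly 30 comparisons, per lookup, and that figure keeps growing as the table grows. A pointer-based traversal from a node with, say, 50 relationships touches just those 50 records, whether the whole graph holds a thousand nodes or a billion.]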

 

Lju Lazarevic (08:09):

What does this really mean? There are a number of really powerful things in this paradigm shift, as it were, of how we're storing the data. And I think a really key one here is that we don't need to hypothesize about how connections are made between discrete entities: either they exist or they don't. This is really powerful because we don't have to try and predict the number of joins or traversals we would need to make between different elements, which is something we would have to do in a more traditional database system where we're doing these joins on read at query time. And this gives us a number of interesting opportunities. One of them is that it makes it a lot easier for us to query patterns. So yes, we could be looking for a specific start point, say a specific person based on a specific social security number.

 

Lju Lazarevic (09:05):

But what if we were looking for patterns, such as a person node connected to an address that's then connected to a series of other node labels or relationship types? We can declare a pattern and just search for that pattern, and that's a lot easier in this setup. And again, it depends on the kinds of queries we're looking to do: if we're doing something that would traditionally be very join-heavy in a relational database system, it's a lot faster in a graph database, just due to the structure of how that data has been set up.
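
[A sketch of declaring such a pattern in Cypher; the labels and relationship types here are invented for illustration.]

// Find people whose address also appears one more hop out,
// for example as the registered address of some company.
MATCH (p:Person)-[:LIVES_AT]->(addr:Address)<-[:REGISTERED_AT]-(c:Company)
RETURN p.name, addr.street, c.name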

 

William Lyon (09:37):

So that sounds good, but obviously there has to be some trade-off, right? We don't get anything for free in the world of databases, and I think building a database is really an exercise in optimizing for different trade-offs. So it's important to understand the trade-offs you're making when you're optimizing for this idea of graph-native, index-free adjacency performance.

 

William Lyon (10:04):

One of those trade-offs is optimizing for reads versus writes. In the property graph model, we say that relationships are first-class citizens: they're explicitly part of the data model. One way to think of a relationship, when we compare it to a relational database, is as a materialized join. In a relational database, we're materializing joins at query time. Going back to this idea of an index-backed set comparison operation at query time, I'm taking that performance hit to see where the joins are. Whereas in a graph database, I'm basically chasing pointers, going to offsets in the file store to find the other nodes I'm connected to as I traverse the graph. The other side of index-free adjacency, then, is that at write time I have to materialize and store these relationships. So in a graph database, we may have different write performance than in a relational database, where I'm not materializing these joins at write time.

 

Lju Lazarevic (11:25):

I think another thing to bear in mind, since some listeners may be going, "Hang on a minute, why haven't these been around forever? Why are graph databases a relatively new thing?", is the advancement in hardware. Graph databases do like to have memory, and part of the reason we have these very performant systems, which can hold billions upon billions of nodes and relationships and run at fast speeds, is that RAM is cheap and plentiful these days. Previously memory was the limiter, which is part of why you have a lot of these setups in traditional relational databases. Now it's cheap and plentiful, and this allows us to have the power of graph databases. Something to bear in mind as well is that they're not always a good fit for everything, and we're going to touch on that later. But before we look at what they may not be the best fit for, let's have a look at what they are a great fit for.

 

William Lyon (12:28):

I think at a high level, graph databases are a good fit when we have the equivalent of lots of joins in our typical workload. We're focusing this episode mostly on transactional use cases, so we'll set aside the graph analytics use cases that also make sense here, and focus on use cases where, in a traditional application, I would be doing lots and lots of joins with a relational database. I think the canonical example, and one that saw a lot of popularity early on in the adoption of graph databases, is personalized recommendations.

 

William Lyon (13:12):

This is the case where a customer is browsing an e-commerce store, and I want to show them personalized recommendations based on their shopping history and the item they're currently looking at. One of the things I can do is traverse my graph of orders and users and products to find the people who bought the thing the customer is currently looking at, and then the other things those users bought; those might be good recommendations to show the current user. That traversal through the graph is very performant in a graph database. Whereas if I look at my massive order and product tables in a relational database, that multi-hop traversal might require some very expensive joins.
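
[A sketch of that "people who bought this also bought" traversal in Cypher, assuming an invented model where orders CONTAIN products.]

// Hop from the product being viewed to the orders containing it,
// then out to the other products in those same orders.
MATCH (current:Product {sku: 'ABC-123'})<-[:CONTAINS]-(o:Order)-[:CONTAINS]->(rec:Product)
WHERE rec <> current
RETURN rec.name, count(*) AS score
ORDER BY score DESC
LIMIT 5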

 

Lju Lazarevic (14:07):

Yeah, and typically those have been overnight batch processes. The way a lot of retail companies have dealt with the challenge of slow queries is to run this process overnight: you get some data, those are your recommendations, and that's what you use the following business day. But obviously the big challenge there is that you're potentially dealing with stale data. What happens when products go out of stock, et cetera? So it's a real game changer to be able to do this in real time, and to adapt according to any promotions you have, or if you want to do some real-time dynamic pricing and so forth.

 

Lju Lazarevic (14:44):

Another really powerful one is fraud detection. We touched a little bit on this idea of looking for patterns, and a great example of this would be looking for fraud rings. Think about retail banking, where typically you would have an account customer who holds a number of products with you: maybe they've got a bank account, maybe they've got a loan with you, credit cards and so forth. What do you do to look out for warning signs? If somebody requests a loan, how do you know whether that's a typical situation or out of the blue?

 

Lju Lazarevic (15:23):

What you can do is start to look for these fraud rings, where you step back and look at things such as whether details are being recycled, say social security numbers. Okay, that's pretty straightforward. But what if we step back a little further and want to understand, for example, whether the same landline phone number for a house is being used by two different people? There's nothing unusual in that on its own, but if you keep traversing through the graph, you might discover that same phone number spread across different properties, and that might trigger alarm bells. So there are lots of things you can start to piece together by being able to look for certain patterns.

 

Lju Lazarevic (16:03):

So, in the example of the fraud ring here, you wouldn't expect to find long chains of connections from a retail customer; you'd expect a fairly small, sparse, star-shaped graph. As soon as you start to see a long line of connections, which you can query as a pattern, that brings back something you may need to investigate further. And you can start to think of that from a real-time perspective: as people put applications in, or try to set up a new account, you can compare the data in your existing graph with the new data coming in, to see whether there's something that needs investigating further.
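
[A sketch of querying for that kind of chain in Cypher. The account and shared-identifier model here is hypothetical.]

// Retail customers normally form small star shapes, so a long chain of
// accounts linked through shared identifiers is worth investigating.
MATCH path = (a1:Account)-[:HAS_PHONE|HAS_SSN*4..8]-(a2:Account)
WHERE a1 <> a2
RETURN path
LIMIT 10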

 

William Lyon (16:42):

Focusing again on this concept of near-real-time query performance: fraud detection, just like personalized recommendations, is a great example where I need the answer to "is this a fraudulent transaction, yes or no?" in something like 10-20 milliseconds. When someone swipes their credit card, they don't want to be standing there for multiple minutes waiting for their transaction to be approved, right? There's a very small window of time in which the bank or the credit card processing company needs to decide: is this a suspicious transaction? Do I want to flag it for further analysis, yes or no? That, I think, is a really important aspect. Just like in the personalized recommendations use case, where I need to serve those recommendations in near real time as the user browses the website. They're not going to wait two minutes for me to go fetch relevant recommendations as they browse my product catalog; I need to show those to the user in tens of milliseconds.

 

Lju Lazarevic (17:51):

Absolutely. And another really graphy use case is network management. This is all about the devices in your network: servers, routers, load balancers, any applications you're running, virtual machines and so forth. What's really useful here is when you've got an outage happening, or a series of events happening within your network that may be leading to an outage. It's not good enough to know that a certain server is going to go down; you want to be able to determine what the impact of that will be. Are you effectively going to end up with an outage that affects many users?

 

Lju Lazarevic (18:36):

So being able to look through your network, which can have varied levels of depth, matters because you're not necessarily going to know how far you need to traverse into your network to find impacted resources. The fact that you can do this easily, in near real time, means you can start to trigger services to deal with it. Another potential advantage is that you can predict outages: if you start to see things happening in your network, you can do real-time root cause analysis, and maybe avert the outage or redirect resources. So that's another really powerful graphy real-time application that would be quite hard to do in a relational database.

 

William Lyon (19:19):

So we've talked through a few specific examples where graph databases make sense. I wonder if we can now find some general themes to help us identify whether we have a graph-shaped problem to work with. If we think of the three examples we've talked about so far, personalization, fraud detection, and network management, maybe we can start to see some of the themes where graph really offers an advantage. I think, at a high level, it's certainly any time we're trying to understand how our different entities are connected, where the relationships in the data are just as important as the entities. In personalized recommendations, this is the traversal: that user made an order that contained a product; this product is in another order that another user made. Those connections in the data are important to the answer to my question, which is: what products would the current user, looking at this product, be interested in?

 

William Lyon (20:26):

Another aspect is that we don't necessarily know how many connections we're interested in at query time. This is the concept of a variable-length graph traversal. One example here is the network management use case: let's say we have a service that goes down, and we want to know all of the applications that are somehow dependent on this service. There may be nested dependencies in the graph that represents the dependencies between this service and the different applications, maybe through different data providers, maybe dependencies on applications that depend on other services. The final applications and products that may be impacted on my site could be directly connected to the service that's going down, or they may be a dependency of a dependency of this service. And I want to traverse just that piece of the graph connected to this impacted service, at multiple depths.

 

William Lyon (21:36):

A graph database is going to be really efficient at finding all of those downstream impacted applications, where I don't know ahead of time whether I want to do exactly one join, two joins, or three joins; I just want all of the child dependencies of this service, for example. So that's another case.
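
[A sketch of that variable-length traversal in Cypher, assuming a made-up DEPENDS_ON relationship and service names.]

// Find everything downstream of the failed service at any depth;
// we don't have to know the number of hops up front.
MATCH (s:Service {name: 'auth-service'})<-[:DEPENDS_ON*1..]-(impacted)
RETURN DISTINCT impacted.name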

 

William Lyon (21:56):

Another one is this idea of finding the pattern. In the fraud detection example, I'm looking for suspicious patterns in the graph that might represent a fraud ring. Lju was talking about this idea of sharing pieces of a synthetic identity: something like seeing multiple accounts sharing, say, a social security number or an address, and seeing suspicious transactions connected to that fraud ring. Maybe I don't have a specific starting point for my graph traversal; I'm rather looking for the bigger pattern. This is another case where graph databases can be helpful. And since we're talking about starting points, it's maybe worth taking an aside here to talk about how we find the starting point for a traversal in a graph database.

 

William Lyon (22:49):

We talked about this idea of index-free adjacency in a graph database, which means we're not using an index at query time to traverse relationships in the graph. That doesn't mean, though, that we don't use indexes at all. There is still a place for indexes in a graph database, but rather than using them to find nodes that are connected, we typically use an index to find the starting point for the traversal. For example, we may create an index on the unique ID for a node. Going back to our person example, if we're using social security number as the unique identifier for the person node, we may create an index on social security numbers so that, when we do a lookup, we can quickly find the node with that social security number. But then, once we start traversing out to the address or whatever other pieces of the graph we're interested in, we're not using the index to see where those relationships exist.
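
[A sketch of that division of labor in Cypher. The index syntax below matches recent Neo4j releases, and the names are our own.]

// The index is only consulted to find the starting node.
CREATE INDEX person_ssn IF NOT EXISTS FOR (p:Person) ON (p.ssn);

// From that starting node, the traversal follows pointers, not the index.
MATCH (p:Person {ssn: '123-45-6789'})-[:LIVES_AT]->(a:Address)
RETURN a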

 

Lju Lazarevic (24:00):

So we've talked a bit about good uses for graph databases. I think perhaps now is a good time to talk about situations where the performance may not be as good as you might hope. There's one example I always think about from a colleague of ours, Max De Marzi, who would do this talk where he'd bring up a picture of Hollywood actors' heights: a list running from the really tall actors down to the not-so-tall actors. I feel this is a really great way to talk about the various strengths and weaknesses that different databases have. If we look at it from a graph perspective, things that are really great fits for a graph database are things around who knows who, so which actors are friends with other actors. If we were trying to find co-actors for the latest movie that's going to be recorded, we could leverage who's worked with who previously and what worked well, and then try to generate some recommendations from that.

 

Lju Lazarevic (25:07):

If we wanted to figure out what films we should watch based on actors we've liked, that's again a good fit. This is all about using those connections, understanding the relationships in the data and so forth. But what this picture is also very good at describing are the things where query performance will be less than great. You can still run these queries, absolutely, that's not a problem at all, but the performance isn't going to match what you'd see in a relational database. If you wanted to ask questions such as what the average height of the actors is, it can be done, but it won't be as performant; the same goes for average salaries. Let's have a quick look at why this is the case, and revisit a little of what's happening under the hood in a native graph database and how the data is being stored.

 

Lju Lazarevic (26:03):

We talked about how we have nodes and relationships, with pointers recording where they connect, and that lookup is very quick: it's pointer chasing. Now let's look a little at where the rest of the data is stored. Think about what makes a property graph database: we've got the nodes, we've got the relationships, nodes and relationships can have properties stored as key-value pairs, and we also have labels for the nodes and types for the relationships. The node and relationship records, as they're stored in Neo4j, have a specific fixed size. And within those records, as well as the pointers in a relationship record to where its nodes live, the node records also hold addresses for where to find the property keys, the values, the labels and so forth.

 

Lju Lazarevic (27:02):

So when you run a query and you want to bring back the properties for a node, as an example, the engine will go and pick up all of the nodes relevant to that query, and then for each node it has to go and look up the requested property in the store. In this average-heights example, we would bring back all of the actor nodes for our query, then for each node go off and do a lookup in the store to bring back the value for the property, and once we had gathered all of those up, we would perform the operation.

 

Lju Lazarevic (27:41):

This is quite different to how a tabular database would work, where all the tabular database needs to do is pull up that single table, which has all of the actors' heights, and run down the column to do the aggregation or calculation required. That means you're going to have a slower query in comparison to doing it against a relational database table. So does this mean we should never do averages in a graph database? Not at all; it's just a reminder to think about how you're using the database. So let's talk a little about using a graph database as a general database.
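
[To make the trade-off concrete, a sketch of that aggregation in Cypher, with an invented Actor model. It runs fine; it just pays a property lookup per node instead of one columnar scan.]

// Fetch every Actor node, then look up the height property for each one.
MATCH (a:Actor)
RETURN avg(a.height) AS averageHeight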

 

William Lyon (28:23):

That example you just gave is a really good one for going back to the point we made earlier, that building a database is an exercise in optimizing for different trade-offs. It's just important to understand which trade-offs your database has optimized for, and understanding that can then help you, as you're building your application, to address certain performance or modeling issues that may come up. I think that's maybe the most important takeaway when deciding whether it makes sense to use a graph database or a relational database or a document database for a certain use case. Again, it comes down to understanding which trade-offs your database is optimized for, and how you can best work with them. So how is that relevant when we're talking about using a graph database as a general database?

 

William Lyon (29:22):

When we're talking about a general purpose database, I think a useful way to think of it is as your first database. When I'm building a new application, at some point I have to choose my tech stack: what's my front-end framework? What am I going to build for the API layer? What database am I going to use? And of course, as our architecture and application get more broadly used and more complex, I'll introduce more systems. Many applications, especially enterprise applications, are not backed by a single database, and oftentimes there are many different systems working together. But I like to think about the general purpose database in terms of this first database: the first database I'm going to use as my source of truth as I'm building out my application.

 

William Lyon (30:16):

And I think one of the most important factors in determining whether a certain database is a good fit as my first database is the ecosystem of framework and language integrations that exists for it. Basically, given the tech stack I've chosen for my application, is there an integration between this database and the technology I'm interested in using? Aside from the different performance optimizations my database is making, that is going to be by far the most limiting factor: how easily can it be integrated into my tech stack? So if we think about this for Neo4j, it's important to understand what ecosystem of integrations exists for it. There is quite a good ecosystem for Neo4j in terms of language drivers, meaning the languages I can use to integrate Neo4j into my application.

 

William Lyon (31:21):

Those are the language drivers, but then above those drivers there's all sorts of tooling at the framework level that makes it easier to integrate Neo4j into my architecture as a first database. These are things like the GraphQL integration for Neo4j, which makes it easier to build GraphQL APIs backed by Neo4j; the Spring Data Neo4j project if I'm using Spring; the Django integration for Neo4j if I'm using Django for the backend; and then connectors and other pieces of tooling, things like the Spark connector, the Kafka connector, and a JDBC driver for Neo4j as well. So, going back to this idea of the general purpose database, or the first database, I think that is really the most important thing to check first: do these integrations with my tech stack exist for the database I'm interested in using? That's where I would look first when choosing my first database.

 

Lju Lazarevic (32:29):

So we've identified that we have the means and mechanisms to connect the database with the framework we're using. The next part is: "Well, okay, great, I can connect it up. Why would I consider using a graph database as a general database, as the first database for my project?" I think the really powerful thing with a graph database is speedy prototyping. It's brilliant in the sense that, as one of the best descriptions I've heard puts it, you can get away with murder if you want to ask a question of your data in the quickest possible way. The really powerful thing is you don't have to declare a schema. You do still have to think about your data model, and please do not confuse not having a schema with not needing a data model. Absolutely, for everything you look at, you do need to think about what your data model looks like.

 

Lju Lazarevic (33:29):

However, you don't have to go through a protracted process before you can start answering questions. The beauty of a graph database is that you think about your data model, you loosely think about how your entities are connected to each other, what the scope of your domain is, and so forth, and in a relatively short amount of time you've got something you can start loading data against. So we can do this: we've now got data in. We know the data model we're using will be less than perfect if we've not spent a huge amount of time on it, but it captures our domain, and it helps us understand roughly how our data is structured and connected. And then we can start answering questions straight away. We've got our data into the database, and Neo4j has what a lot of people call "schema on read".

 

Lju Lazarevic (34:22):

It figures out what the schema is as you're putting data into the database, and it'll tell you what it looks like based on what you've loaded. And as you start querying, you're getting feedback, you're getting the answers to your questions, and you can pull a project together very quickly. Then, if you want to start looking at optimizing how your data is stored, the structure of your data in the database, you can make changes relatively easily with the data in place. You can refactor the data as it stands, without necessarily having to get rid of everything and load it back in again. So this is a really powerful way of rapidly prototyping projects, this whole idea of failing fast. You can answer questions quickly, and it gives you the edge of being able to get going as fast as possible.
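
[A sketch of that kind of in-place refactoring in Cypher: say we loaded city as a plain property and later decide it deserves to be its own node. The names are invented.]

// Promote the 'city' property into a City node and connect people to it.
MATCH (p:Person)
WHERE p.city IS NOT NULL
MERGE (c:City {name: p.city})
MERGE (p)-[:LIVES_IN]->(c)
REMOVE p.city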

 

William Lyon (35:15):

Yeah, I think those are some really good points. The way I think about it is this idea of intuitiveness: a graph model is much more closely aligned with how we think about data, and especially with how we work with data in our applications. I spend a lot of time thinking about building API applications, and oftentimes the way we interact with an API is really in the context of relationships. Going back to the e-commerce example: if I'm a customer placing an order for products, I want to add a product to my order, and as a user of this e-commerce API application, the relationships are really what's important in that interaction. That's intuitively how we think of the data, so it really makes sense to model, store, and query that data as a graph as I'm building out my applications.

 

Lju Lazarevic (36:18):

Absolutely, very intuitive, and it makes that whole process so much faster. So hopefully we've got you all curious and keen to explore this database paradigm. There are some options for you to check out, and what we'd recommend as a starting point is to have a look at the Neo4j sandboxes. There's a blank one if you're feeling adventurous and want to go away and build something from the documentation. But for those of you who just want a gentle, guided journey, we have examples in there for recommendations, fraud and so forth, and what you'll find in those is pre-canned data as well as built-in browser guides. They'll show you the data model, they'll show you the queries, and this is a nice way to dip your toe in the water and have a look at what's going on.

 

William Lyon (37:23):

One other thing I'd like to mention that may be interesting is Global Graph Celebration Day, which is coming up in the next couple of days. Global Graph Celebration Day is April 15th, the birthday of the founder of graph theory, Leonhard Euler; you could think of it as Euler Day, perhaps. Anyway, we like to feature interesting things that the global graph community has been working on. So if you go to globalgraphcelebrationday.com, and we'll link this in the show notes, please join us for an extended live stream digging into more of the history of graphs and looking at interesting things the community is working on. We'll also have a Neo4j GraphQL community call to discuss the recent additions to the Neo4j GraphQL library, with some of the engineering team on hand to answer any Neo4j GraphQL questions that come up. So if you're able to join us, please tune into that; the recording will also be available afterwards.

 

Lju Lazarevic (38:38):

And last but not least, we have our annual developer conference on the 17th of June: NODES, the Neo4j Online Developer Expo and Summit. We have something for everyone there, whether you're completely brand new to graphs or an experienced expert, data scientists and developers alike, so please do join in. Keep an eye on your various Neo4j channels, and we'll put a link in the show notes as well. Do check it out; it's a fantastic extravaganza showing what the community has been up to. So with that, thank you very much for joining us today, and we'll catch up again in the next episode.

 

William Lyon (39:25):

Be sure to subscribe to GraphStuff.FM in your favorite podcast app. And if you enjoyed today's episode, please consider leaving us a review in Apple Podcasts to help others find our episodes. Cheers.