GraphStuff.FM: The Neo4j Graph Database Developer Podcast

Making Invisible Connections Visible with Tim Eastridge

Episode Summary

Our special guest is Tim Eastridge, who specializes in graph data science consulting and founded Eastridge Analytics to help organizations build analytics solutions. He wrote the book Graph Data Science with Python and Neo4j, has worked as a consultant with the PRAC (Pandemic Response Accountability Committee), and has built knowledge graphs for the private equity space. He is also part of the Neo4j Ninja program and has been selected to present at NODES 2024! To wrap up the episode, we'll cover our favorite tools of the month and highlight events we're planning to attend in the near future.

Episode Notes

Speaker Resources:

Tools of the Month:

Announcements / News:

Episode Transcription

Jennifer Reif: Welcome back, graph enthusiasts, to GraphStuff.FM, a podcast all about graphs and graph-related technologies. I'm your host, Jennifer Reif, and I'm joined today by fellow advocate Jason Koo.

Jason Koo: Hello.

Jennifer Reif: And our guest today is consultant and author Tim Eastridge. Tim is a Neo4j Ninja. He's also written a book called Graph Data Science with Python and Neo4j, which I'm still working my way through. He's also a consultant who has worked with the PRAC, the Pandemic Response Accountability Committee, as well as on knowledge graphs for the private equity space. So, Tim, welcome, and thanks for joining us today.

Tim Eastridge: Yup. Thanks for having me. Great to be here.

Jennifer Reif: Would you want to start a little bit with just your background and maybe how you got interested or found out about Neo4j?

Tim Eastridge: Sure. Absolutely. I've been a data scientist for about 10 years now, and actually started using Neo4j in 2020. When the pandemic first started, I was on a data science team at Bank of America at the time, and was tasked with finding fraud in the $5 trillion that got pumped through the American banking system in 2020 and 2021. And so, as you can imagine, a lot of money flowed through the bank at the time.

So it was helpful to have a native graph database to stitch together all of the transactions and look for misuse of government funds. That was my initial foray into the exciting world of graph data science. Since that time, I left to join a contract with the U.S. government to continue that work, looking for fraud in government data. And I'm also the founder of Eastridge Analytics, where we do freelance data science consulting and specialize in knowledge graphs and Neo4j.

Jennifer Reif: Awesome. Well, that sounds like a really great pandemic project, actually, to get into Neo4j. Right?

Tim Eastridge: Yeah. Yeah. Absolutely.

Jennifer Reif: Good way to spend your lockdown, I suppose.

Tim Eastridge: Yes. Yes.

Jennifer Reif: Go ahead, Jason.

Jason Koo: Tim, yeah, I wanted to ask about the fraud detection. Right? Fraud detection is one of those major use cases for graphs. What introduced you to graphs during this fraud work? Or, when you first started the project, were other people like, "Okay. We're using graphs right out of the gate"? What was the initial start of that graph journey, coming from a fraud position?

Tim Eastridge: Sure. Yeah. We already had a license with Neo4j at the time on the team. We were using it for application tracking and monitoring third-party vendors. And so, it was a natural segue to use it for fraud detection, because we already had the licensing, and we were having trouble with query latency on the relational database we were using at the time, which was Teradata.

And so, trying to do these multi-hop queries to see, "Okay. This person sent a large amount of money to this other person, who then wired it out of the country, who shares a common phone number with this person." Some of these complicated multi-hop queries are just much simpler using a tool like Neo4j.
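As an illustrative sketch, a multi-hop pattern like the one Tim describes maps to a single Cypher query run through the Python driver. The labels, relationship types, and dollar threshold below are hypothetical stand-ins, not the actual banking schema:

from neo4j import GraphDatabase

# Connection details are placeholders.
driver = GraphDatabase.driver("neo4j://localhost:7687", auth=("neo4j", "password"))

# Hypothetical schema: large payment -> wired abroad -> shared phone number.
fraud_pattern = """
MATCH (a:Person)-[t1:SENT_MONEY]->(b:Person),
      (b)-[:WIRED_MONEY]->(acct:Account),
      (b)-[:HAS_PHONE]->(ph:Phone)<-[:HAS_PHONE]-(c:Person)
WHERE t1.amount > 100000 AND acct.country <> 'US'
RETURN a.name AS sender, b.name AS receiver, c.name AS linked, t1.amount AS amount
"""

with driver.session() as session:
    for record in session.run(fraud_pattern):
        print(record["sender"], record["receiver"], record["linked"], record["amount"])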

Jason Koo: Nice. Thank you. For other people wanting to get started with this sort of space, like they've got a use case where they want to do fraud detection, is there a top recommendation you would give to folks that are just kind of starting their fraud detection journey?

Tim Eastridge: Sure. Yeah. GraphAcademy, offered by Neo4j, is great. A lot of videos there to help you get started. I actually wrote the book, like Jennifer mentioned, for this type of use case. It's called Graph Data Science with Python and Neo4j, and I worked with a publisher called AVA.

And the reason I wrote that book was for data analysts or data scientists like myself who are already working with a lot of data, and have maybe been introduced to the concept of knowledge graphs and graph databases, but need to overcome that initial learning curve of, "Okay. How do I get data into the database? What are the different algorithms within this database, so that I don't have to move data into Python to run them? I can actually run them here myself. How do I visualize it with tools like Bloom?" We even cover Power BI, because a lot of people are using Power BI for analytics these days, so you can create those graph visuals there too. So that book is a great resource, and I'm getting good feedback that people are finding it helpful. So I think between those two options, that's a good way to get started.
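As a minimal sketch of that first hurdle, getting data into the database, here is one common loading pattern with the official Neo4j Python driver. The CSV columns, labels, and relationship type are hypothetical examples, not taken from the book:

import pandas as pd
from neo4j import GraphDatabase

# Hypothetical CSV of transactions, just to illustrate the loading step.
df = pd.read_csv("transactions.csv")  # columns: sender, receiver, amount

driver = GraphDatabase.driver("neo4j://localhost:7687", auth=("neo4j", "password"))

def load_rows(tx, rows):
    # MERGE keeps the load idempotent (one node per person, one
    # relationship per pair, for simplicity of the example).
    tx.run("""
        UNWIND $rows AS row
        MERGE (s:Person {id: row.sender})
        MERGE (r:Person {id: row.receiver})
        MERGE (s)-[t:SENT_MONEY]->(r)
        SET t.amount = row.amount
    """, rows=rows)

with driver.session() as session:
    session.execute_write(load_rows, df.to_dict("records"))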

Jennifer Reif: Yeah. What I've really liked about your book so far is, as a non-data scientist coming in, I feel like it is giving a good introduction to just the concepts overall, and just kind of that soft ramp into the space in general, I think, is really helpful, even just outside of the data science community.

Tim Eastridge: Great. I'm glad you're finding it helpful. I actually have a copy on the shelf, in case anyone wants to see it. It's on Amazon. So this is what it looks like: Graph Data Science with Python and Neo4j. It's about 250 pages, and it comes with data examples. There's a GitHub link, so you can pull the data examples and just follow along with the book, copy and paste code to see it run and execute. That's how I've learned a lot through my data science journey: just copying code to run it, seeing what happens, and then iterating from there.

Jason Koo: Spoken like a software developer. So when I'm writing blog articles and stuff, in the process of writing, I generally learn something new about the subject that I'm working on. When you were working on this book, did you have that experience as well? Was there something that you learned in the process that you're like, "Oh, wow. That was just..." Something you just really want to share with folks?

Tim Eastridge: Yeah. Absolutely. A couple of the chapters cover large language model integrations with knowledge graphs. And while I was writing this book, everything was changing with LLMs and generative AI. So we cover embeddings. We cover synthesizing large documents into more manageable summaries. That whole process was exciting, and I learned a lot while writing the book about how to leverage these tools. And, of course, Neo4j has been instrumental in leading that charge of integrating large language models with graph databases. So I'm just grateful for the whole community as we're all learning together.

Jason Koo: Well, thank you for your contributions. No, this is great. I know this book didn't come out that long ago, but are you already considering another book?

Tim Eastridge: That's a good question. Not immediately, I'll say. It does take quite a bit of work, but maybe in a couple of years, once things have matured a little bit with large language models and graph integration. Maybe a sequel to the book, if you will, to go from the intermediate to the advanced level.

Jason Koo: Right. Well, with the speed that things are developing, that might happen in maybe three to six months.

Tim Eastridge: Yes. Absolutely.

Jennifer Reif: I was wondering if you would tell us a little bit about your work on knowledge graphs in the private equity space, of course, whatever you're comfortable sharing, and all of that.

Tim Eastridge: Sure. Yeah. So we worked with a large client in the private equity space, and they were looking to do deal sourcing and find smaller companies to acquire. The way graph analysis helped enrich that deal-sourcing process is that it could find complex patterns in the data based off of which limited partners committed to which general partners' funds. So there are two terms here that I'll be using.

Limited partners are basically the private equity companies, and general partners are the ones creating the funds that the private equity firms will invest in. And so, looking at that complex web of commitments and funding can help you understand the dynamics of the investment landscape in a new way. And using algorithms like PageRank, you can figure out who the most influential entities are within this complex graph.

And so, that was a really helpful metric for the salespeople as part of the deal-sourcing pipeline, because you can start with the most influential people first. Some of the names would be familiar in the industry; you'd see the PageRank scores match up with what you would expect. But it adds a little more science and metrics to that intuition, which is what data science is in general. Right?
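A hedged sketch of what that might look like with Neo4j's graphdatascience Python client, assuming a hypothetical schema of limited partners committing to funds managed by general partners:

from graphdatascience import GraphDataScience

# Connection details are placeholders.
gds = GraphDataScience("neo4j://localhost:7687", auth=("neo4j", "password"))

# Hypothetical schema:
# (:LimitedPartner)-[:COMMITTED_TO]->(:Fund)<-[:MANAGES]-(:GeneralPartner)
G, _ = gds.graph.project(
    "commitments",
    ["LimitedPartner", "Fund", "GeneralPartner"],
    ["COMMITTED_TO", "MANAGES"],
)

# Run PageRank in-database and surface the most influential entities first.
# The result holds nodeId/score columns; map IDs back to node properties
# with gds.util.asNodes if needed.
scores = gds.pageRank.stream(G)
print(scores.sort_values("score", ascending=False).head(10))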

Jennifer Reif: Just little extra validation with the algorithms.

Tim Eastridge: Yeah. And it ends up being a recommendation engine, because you can sort the highest-priority leads first. And looking at those complex interactions helps you not only understand the landscape, but also look at the time series element and understand when those commitments occurred. What often happens in the private equity space is, when one large institutional investor makes a commitment to a certain fund, a big wave of smaller investors will come invest as well. And so, that's where PageRank comes into play, because you're looking at the commitments over time and understanding how all those interactions play out.

Jennifer Reif: Right. Kind of like that, I guess, popularity effect, when you have a celebrity or a star go do something, and then everybody else follows suit. Now you kind of know, "Okay. This is a celebrity that people are looking up to, and this is where the trend is going, because this high-profile person made this decision."

Tim Eastridge: Yeah. Yeah. Absolutely.

Jennifer Reif: Okay. Were there any gotchas or tips with working with that type of space or those problems?

Tim Eastridge: So one thing we found really helpful in that project was LangChain. LangChain is a framework that helps you use large language models for various tasks. Right? And LangChain has this very helpful RDF triple extractor that is useful for graph integration. RDF is basically subject, predicate, object. So if you have a sentence, it will separate that sentence into a semi-structured format to help you understand the text.

And LangChain would automatically create these RDF triples for us, and we found that to be really helpful, because then you can create a graph from this unstructured data. I don't know if that directly answers your question about gotchas, but I think it was more of an aha moment, where we were like, "Oh, this is really helpful and integrates well with our graph project."
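LangChain's extraction API has moved around between versions; at the time of writing, the equivalent lives in langchain_experimental as LLMGraphTransformer. A rough sketch, where the model choice and sample sentence are just examples:

from langchain_core.documents import Document
from langchain_experimental.graph_transformers import LLMGraphTransformer
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o", temperature=0)
transformer = LLMGraphTransformer(llm=llm)

text = "Daniel Ellsberg leaked the Pentagon Papers to the press in 1971."
graph_docs = transformer.convert_to_graph_documents([Document(page_content=text)])

# Each extracted relationship is effectively a subject-predicate-object
# triple that can then be written into Neo4j as nodes and relationships.
for rel in graph_docs[0].relationships:
    print(rel.source.id, rel.type, rel.target.id)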

Jennifer Reif: Okay. That's cool. Are there resources you would recommend for people wanting to explore this type of scenario or maybe just the general knowledge graph kind of structure in general?

Tim Eastridge: I mean, beyond the two we already covered, I'm trying to think. I read Tomaz's book. I thought that was a very helpful book as well. Do you recall the title of that? Was it Graph Algorithms? I'll have to go back and check, but-

Jennifer Reif: I'd have to look it up too. We can figure that out. We'll drop a link in the show notes.

Tim Eastridge: Okay. I was trying to remember that one earlier. That was also a very helpful book, and I think he covers the RDF triples in his book, if I recall correctly.

Jennifer Reif: Okay.

Jason Koo: Yeah. It is Graph Algorithms for Data Science. Yeah.

Tim Eastridge: Okay. Perfect.

Jennifer Reif: Okay.

Jason Koo: The memory is good. Oh, With examples in Neo4j. So also available on Amazon.

Jennifer Reif: Okay. Cool. Do you feel that there are specific problems developers are trying to solve or things they're having trouble with, either just in the graph space or with GenAI, or knowledge graphs, or any of that domain?

Tim Eastridge: Yeah. I've seen Gartner Research's top technologies for 2024 come up a couple of times this year, and smack in the middle for 2024 is generative AI, which is no surprise. But right next to that is knowledge graphs, listed as a key enabler. And I think that's really important, because there's a nice synergy between generative AI and knowledge graphs where, in a way, generative AI is creating more data.

We were already having trouble with so much data, and now we have even more. So it's about having a flexible storage mechanism to harness the power of these large language models as they interact with our proprietary data: a flexible way to store it, interpret it, and use the large language model to query the graph database, with things like GraphRAG. Right? So yeah, it's just a really exciting time to have these kinds of integrations.
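One common shape of GraphRAG is letting the LLM translate a natural-language question into Cypher against your graph. A hedged sketch using LangChain's GraphCypherQAChain; connection details are placeholders, and depending on your LangChain version the allow_dangerous_requests flag may or may not be required:

from langchain.chains import GraphCypherQAChain
from langchain_community.graphs import Neo4jGraph
from langchain_openai import ChatOpenAI

graph = Neo4jGraph(url="neo4j://localhost:7687", username="neo4j", password="password")

# The chain reads the graph schema, has the LLM write Cypher, runs the
# query, and phrases the returned rows as an answer.
chain = GraphCypherQAChain.from_llm(
    ChatOpenAI(model="gpt-4o", temperature=0),
    graph=graph,
    allow_dangerous_requests=True,  # required in recent LangChain releases
)

print(chain.invoke({"query": "Which entities are most connected to fund X?"}))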

Jennifer Reif: As far as your clients and things, are there specific things that companies are trying to build, or maybe newer problems or new twists on problems that you've seen crop up, especially this year?

Tim Eastridge: Yeah. I've encountered several startup companies that are working on these chatbot-type interactions. And chatbots have always been a challenge, because of the nature of intelligence in general. Right? How do we have the system reply with intelligent answers? So a few clients have been asking not only to have correct answers replied back, but also to have this larger context window.

So one of the challenges we're wrestling with is, how do we store these contexts, with a time element, within Neo4j? Right? Because in a smaller proof-of-concept arena, it's relatively simple if it's just one conversation. But if you're trying to learn across multiple users and multiple sessions, it quickly becomes complicated, and that orchestration layer becomes complicated. So on the one hand, large language models solve a lot of our problems, and on the other hand, they create a bunch of new problems.
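One hypothetical way to model that in Neo4j is a per-user, per-session message graph with timestamps, so context can be replayed across sessions. A sketch, where all labels and properties are assumptions rather than any client's schema:

from neo4j import GraphDatabase

driver = GraphDatabase.driver("neo4j://localhost:7687", auth=("neo4j", "password"))

# Append a chat turn under a user's session, stamped with the current time.
add_message = """
MERGE (u:User {id: $user_id})
MERGE (u)-[:HAS_SESSION]->(s:Session {id: $session_id})
CREATE (m:Message {role: $role, text: $text, ts: datetime()})
CREATE (s)-[:HAS_MESSAGE]->(m)
"""

# Pull the most recent context across all of a user's sessions.
recent_context = """
MATCH (:User {id: $user_id})-[:HAS_SESSION]->(:Session)-[:HAS_MESSAGE]->(m:Message)
RETURN m.role AS role, m.text AS text
ORDER BY m.ts DESC LIMIT 10
"""

with driver.session() as session:
    session.run(add_message, user_id="u1", session_id="s1", role="user", text="Hello!")
    print(session.run(recent_context, user_id="u1").data())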

Jennifer Reif: Yeah. We've had a few conversations about that here and there, just amongst advocates and so on, that all these new technology spaces crop up every so often, these big waves of new innovation. And they always solve a bunch of things, but on some level, sometimes the problems are still the same. They just show up in new and inventive ways, right?

Tim Eastridge: Absolutely. Absolutely. And I actually had a conversation earlier today about the fear that, as we have more automation, what do we do with people's free time? So I had a lengthy conversation about how AI is going to interact with society as a whole. It's one of those complicated questions where there are no correct answers, but it's important to keep top of mind as we think about human-centric organizations. How do we keep the purpose of why we are operating and coming to work every day front of mind?

So at Eastridge Analytics, we think about integrity constantly. Right? Integrity meaning multiple things: obviously the truthful, honest aspect, but also this wholeness and completeness aspect. And so, as we're building AI applications, how do we build them in a human-centric, integrated way, in a way that has integrity as a whole?

Jason Koo: So I love that. Right? I love that you're trying to keep humans in the loop, because we're an important part of the process. So what advice would you have for other companies who are struggling with the same thing? Like, "Okay. We really want to use AI technologies," but how do we enhance the people we have versus replacing people? Right? Because we hear about a lot of layoffs, and a lot of the reasoning for layoffs is, "Oh, because we're going to do some sort of AI initiative." What would you say to these companies?

Tim Eastridge: So in my opinion, knowledge and teaching are really important for humans as technology continues to grow. We, of course, want automation, we want things like this, but we also need to teach and make sure that our employees and contractors are learning as we go. And so, I think that's where some of the learning materials we talked about earlier come into play, so that they can get up to speed on how these things work under the hood. I think that learning mechanism is really important.

And then it also ties in well with knowledge graphs. As we're gaining knowledge about how knowledge graphs work, I think these systems open the door for learning in new ways. And so, I'll be speaking at the NODES conference in November, and the topic of that talk is going to be a new look at the Pentagon Papers.

And so, what I'm going to try to accomplish with that talk is to use generative AI and knowledge graphs to explore a really important artifact of history, the Pentagon Papers, published in 1971, and offer a fresh perspective on this body of text. Right? And so, I think it's about keeping humans in the loop, human-centric. How do we use this technology to improve ourselves? That's the shorter version of what I was trying to say there.

Jason Koo: Could you expand on the Pentagon Papers? Right? I mean, you mentioned it was the early '70s, but I think it's probably fallen out of most people's memory, including mine.

Tim Eastridge: Absolutely. Yes. Yes. The Pentagon Papers were confidential, top-secret documents related to the Vietnam War. Daniel Ellsberg leaked the information: 7,000 pages of top-secret documents that were part of the McNamara research study. They were researching the status of the war, and the research the government was doing directly contradicted what the government was saying to Congress and to the people of the United States.

So it was a really important leak. A lot of people were turning against the Vietnam War by 1971, and this leak confirmed their suspicions that the government was misleading, and in some cases lying, about what it was reporting about the war. Yeah. But it's 7,000 pages. And so, I'm hoping we can use generative AI to summarize some of these documents and find those key disparities between what was told to the public and what was told to the White House.
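Summarizing 7,000 pages usually means chunking the text and doing map-reduce-style summarization before anything goes into the graph. A rough sketch of that step, where pentagon_papers.txt is a hypothetical stand-in for text extracted from the National Archives scans:

from langchain_openai import ChatOpenAI
from langchain_text_splitters import RecursiveCharacterTextSplitter

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
splitter = RecursiveCharacterTextSplitter(chunk_size=4000, chunk_overlap=200)

# Split the raw text into LLM-sized chunks, summarize each, then combine.
chunks = splitter.split_text(open("pentagon_papers.txt").read())
summaries = [
    llm.invoke("Summarize this passage in three sentences:\n\n" + c).content
    for c in chunks
]
overall = llm.invoke("Combine these into one brief summary:\n\n" + "\n".join(summaries)).content
print(overall)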

Jason Koo: Nice. So I'm assuming you're ingesting all this data into a knowledge graph about the papers. Are you also including news reports from the same era to support or bolster or compare what's happening in the papers versus what's happening or known to the public at the time?

Tim Eastridge: Yes. Yeah. Exactly. I want to try to throw in as much text information as I can, and then let the AI help us come up with a new analysis, if you will, a new-age analysis of these important papers that were published a long time ago. And obviously, the same techniques will be helpful for current information: as new things come out, how can we analyze that information for ourselves, have a fresh look at it, and grow our own knowledge base? So that's the goal.

Jennifer Reif: And then, like you said, use AI as a tool to improve our knowledge and decision-making, not necessarily as a replacement to make decisions or do things for us, but as something that helps us make those higher-level decisions and keep moving up in the knowledge arenas.

Tim Eastridge: Yes. Exactly. Exactly. We heard from the media that these Pentagon Papers directly contradict what was told to the public. And that's what we want to show in this demo: using these new tools, we can find that out for ourselves. We don't have to rely on the media telling us what the 7,000 pages say, and we also don't have to read all 7,000 pages ourselves. We can have this in-between, where we do some of the analysis ourselves but also rely on the AI to help us with it.

Jason Koo: Sounds like a super meaty project. It sounds like enough to fill a book as well. Yeah. I'm looking forward to that talk and the learnings. The demo and the code, will you make those public, or the dataset public, by the end of this journey?

Tim Eastridge: Yeah. Yeah. Absolutely. I'm still in the process of creating the database, obviously. I still have a couple of months, but I will publish it open source. There's not currently a dataset, definitely not a graph dataset, of these papers that I've found. I went on Kaggle and tried to find whether any NLP work had been done on these documents, and wasn't able to find anything. So I've actually just downloaded the raw data from the National Archives, and I will publish what I come up with.

Jason Koo: Well, that's great. No, thank you. That's quite a project.

Tim Eastridge: Yeah.

Jennifer Reif: Yeah.

Tim Eastridge: [inaudible 00:23:17] be interesting.

Jennifer Reif: We're really excited to see the output of it at NODES. So if you're interested in attending the event, for those of you listening or watching, we'll have a link in the show notes to NODES 2024, where you can see talks like Tim's on the Pentagon Papers, or many, many others about graphs, generative AI, knowledge graphs, and all related topics. Are there other things you're currently working on that you'd like to share or give insight into? I know, obviously, there's prep for the NODES talk.

Tim Eastridge: Yeah. Sure. So one other project Eastridge Analytics is working on is using generative AI and knowledge graphs to explore U.S. patent data in a new way. Patents are, of course, used to defend intellectual property. If you come up with a patent, you can ask other companies to license that idea from you if they want to build the thing you came up with. Take AirPods, for example. Right? "Oh, I want to come up with a Bluetooth wireless speaker you can chat through." Right?

Then if you came up with that idea first, you can patent it. And when Apple comes along and tries to make it, you say, "Yes, you can still make it, but you need to pay me a percentage in order to make it." So there's that licensing aspect to it. And licensing has historically been really tricky in the patent space, in large part because the legal jargon of these patents is really complicated.

If anybody has ever tried to read a patent: what exactly are we talking about here, and what is covered by this patent? So that's one thing Eastridge Analytics is working on. We have a proprietary dataset of all patents going back to the 1960s from the U.S. Patent and Trademark Office, and we look at simplifying what a patent actually means into layman's terms, like, "Okay. This is what this company actually patented."

There's also the citation analysis, which builds a network of which patent cited which patent, and there's a node centrality component to it. So, what is the most important patent within this space of Bluetooth headphones? And then there's the licensing aspect of it: we want to help companies be able to monetize their intellectual property.

We're proactively using AI to help companies, and technology transfer offices at universities, look for revenue opportunities from their existing IP. So they have this patent on an AirPods-style device, and we can give them 10 companies they can reach out to for licensing opportunities. Yeah. That's another project we're currently working on.
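For the citation-centrality piece, even plain Cypher gives a first cut, such as counting inbound citations within a technology area. The labels and properties here are hypothetical, not the Eastridge Analytics schema:

from neo4j import GraphDatabase

driver = GraphDatabase.driver("neo4j://localhost:7687", auth=("neo4j", "password"))

# Most-cited patents in one (hypothetical) technology area.
top_cited = """
MATCH (p:Patent {area: $area})<-[:CITES]-(citing:Patent)
RETURN p.number AS patent, p.title AS title, count(citing) AS citations
ORDER BY citations DESC LIMIT 10
"""

with driver.session() as session:
    for row in session.run(top_cited, area="bluetooth-headphones"):
        print(row["patent"], row["title"], row["citations"])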

Jennifer Reif: That sounds like a really intriguing knowledge graph that you could use for several different use cases, actually.

Tim Eastridge: Yeah. We're excited about it. Each week, the USPTO publishes new patents, and we're actively pulling them in. Then we can do automated alerting, so that if a new patent cites an older patent, we can notify the original patent holder and say, "Hey, this company cited your patent. You might want to take a look at it for potential licensing opportunities, since you had the original idea. Maybe they tweaked it a little bit, but they might still need to pay you a small licensing fee in order to create the new concept."
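That alerting idea sketches out naturally as a scheduled Cypher query over the latest weekly load; again, the schema here is assumed purely for illustration:

from neo4j import GraphDatabase

driver = GraphDatabase.driver("neo4j://localhost:7687", auth=("neo4j", "password"))

# New patents from the past week that cite an older patent, grouped by
# the original holder so each one can be notified.
alerts = """
MATCH (new:Patent)-[:CITES]->(old:Patent)<-[:HOLDS]-(owner:Company)
WHERE new.published >= date() - duration('P7D')
RETURN owner.name AS holder, old.number AS cited, collect(new.number) AS citing
"""

with driver.session() as session:
    for row in session.run(alerts):
        print(f"Notify {row['holder']}: {row['cited']} cited by {row['citing']}")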

Jason Koo: No, that's awesome. That sounds like a very complicated space to master if you didn't have a knowledge graph, because it sounds like you have not just all the patents in the knowledge graph, but also emerging companies, or existing companies with certain technologies. It sounds like a knowledge graph that encompasses two different domain spaces so that you can find these connections.

So how do you keep the database up-to-date with companies that are in the market? The Patent Office releases information that you're able to ingest, but finding information on publicly available companies seems like a more difficult ingestion task.

Tim Eastridge: Yeah. Yeah. Absolutely. It's a hybrid of manual search and web scraping. And because it is more complicated, like you said, we prefer to look for opportunities within the patent database itself, and try to offer licensing opportunities based off of this contained environment, because once we do that extra step of going out to the web, you get a lot of false positives, a lot of nonsense that gets pulled in by the AI.

So there has to be that human-in-the-loop element, and I think that's more on the road map of where we're heading. We're still trying to get traction, finding customers and all that, to build out that next phase of the project.

Jason Koo: Okay. One last technical question about that. As you were talking, I was thinking, "Oh, there are actually at least two architectural patterns that could work here." Right? One is to have one database that contains both the patent information and the external company data, or you separate the two and then manage across them. Have you decided which type of pattern works best?

Tim Eastridge: Yeah. We have it all within the one database right now. We could do that Fabric approach as phase two, but just to keep things simpler, we have it all within the one database for now.

Jason Koo: Yeah. No, that makes sense. And for those listening, Fabric is a Neo4j product for managing multiple databases.

Jennifer Reif: Like federated data. Yeah. So, horizontal scaling.

Jason Koo: Yeah. Tim, have you used Fabric previously?

Tim Eastridge: I have, a little bit. At Bank of America, we had Fabric set up, and so we were able to query across different databases. My preference is to keep the information we need within one database, to keep things as simple as possible, but it is nice to have the flexibility to query a different data store if needed.
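For the curious, a Fabric (composite database) query routes parts of a Cypher statement to different member graphs with USE clauses. A sketch with hypothetical database names:

from neo4j import GraphDatabase

driver = GraphDatabase.driver("neo4j://localhost:7687", auth=("neo4j", "password"))

# Hypothetical Fabric/composite setup named "fabric" with two member graphs.
query = """
CALL {
  USE fabric.patents
  MATCH (p:Patent) RETURN p.title AS name
  UNION
  USE fabric.companies
  MATCH (c:Company) RETURN c.name AS name
}
RETURN name LIMIT 10
"""

# Sessions must target the composite database itself.
with driver.session(database="fabric") as session:
    print(session.run(query).data())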

Jason Koo: Right. Cool. Thank you.

Jennifer Reif: Okay. Do we want to jump into our tools of the month and talk about what we've been using the last month and what we're really excited about?

Jason Koo: Yeah. Let's do it.

Tim Eastridge: Yeah. Sounds great.

Jennifer Reif: I'm happy to go first, actually. So lately, I have been playing with VS Code. I've been doing a lot in VS Code in general. I have various IDEs and text tools that I use, but I keep honing in on VS Code for a variety of different things: content writing, managing GitHub, branches, versions, and merging. And I really like their merging capabilities. They highlight things really nicely and make it easy to merge different branches into one another.

So that's kind of nice. So just a little bit of everything: I write AsciiDoc in it, I write code in it, I manage GitHub projects in it. So if you haven't tried VS Code, I would definitely recommend it. And there's even a Neo4j plug-in, so you can manage your Neo4j database within VS Code itself. So check that out.

Jason Koo: And I will second that VS Code is a great IDE; it's my primary IDE as well. But my tool of the month is Cursor AI, which is an AI IDE that's extremely similar to VS Code. The layout is quite similar, it can use the same plug-ins, and it starts off with the same hot keys. The biggest difference is the chat interface with an LLM: the AI for code completions, suggesting code, and asking questions about code is built in.

You can get the same effect in Visual Studio Code with AI plug-ins like Cody or Copilot, but it's a little more seamless with Cursor. Now, Cursor, I think, has three payment plans. There's a free one, so definitely give that one a go, but very quickly you will start running out of completions and whatnot.

So I think the Pro plan, the one right in the middle, gives you, I believe, unlimited completions, but limited calls to the underlying LLM, which, I believe, is either Claude or OpenAI, and I think you have options to connect to both. So anyway, that's my tool of the month.

Jennifer Reif: I actually didn't realize that Cursor AI was an IDE. I assumed it was just a tool you would use.

Jason Koo: Oh, like Word?

Jennifer Reif: Almost like a ChatGPT type of thing, I guess. I didn't realize it was a separate IDE in and of itself. That's cool.

Jason Koo: Yeah. Completely separate. But yeah, they were definitely inspired by Visual Studio Code. If you're using VS Code and you jump over to Cursor, you won't have any problems. I guess my first hiccup or difference with Cursor is autosave; maybe there's a setting for it. I usually edit code in VS Code and it autosaves, and then I can just run stuff. In Cursor, I usually have to press Save. But now that I think about it, it's probably just a setting.

But if you've had the LLM editing the code for a while, and then you decide, "Okay. This isn't working," and you want to stash it or revert to another commit, that part was a little trickier. Right? I'll do a stash, and the code still sits in Cursor. It doesn't go away like in Visual Studio Code, which would automatically update to your original code. So I haven't found the best way around that, other than to basically close that window and force it to refresh and reopen. I'm sure there's a better way to do it, but that was my one hiccup at the moment.

Tim Eastridge: Yeah. I'm excited to check that out. That sounds like a really great tool, especially having spent a couple of hours yesterday copying and pasting code from Claude. I was building a Streamlit app for a proof of concept for a client, and Claude would make a change, and I was just moving code back and forth. So using that tool would probably save me a lot of time.

Jason Koo: Yeah, because you can request the edit right in the IDE, and it will edit in line, and then you can choose whether to accept it.

Tim Eastridge: Oh. I definitely will check that out.

Jason Koo: Nice. Yeah. Tim, what's your tool of the month?

Tim Eastridge: Let's see. So, LLM Graph Builder, a new tool that Neo4j is actively building. I tried it out for the first time a week or so ago, and I'm actually going to use it for the NODES talk in November, for the Pentagon Papers use case. It just seems really intuitive and easy to use. You throw in your text documents, and it will automatically parse out the important elements and create a knowledge graph for you. And then it has a built-in GraphRAG element, where you can chat with your data right there in the same portal. So I'm really excited to use that tool more as I work on the NODES conference talk.

Jason Koo: Nice. I think there was a recent update. So if you're running locally, I think you can use Ollama now for local models, if I recall. I'll have to double-check, but I'm pretty sure I saw that come across the wire.

Jennifer Reif: Nice. That would be awesome.

Tim Eastridge: Yeah. That would be great. I think I was using GPT-3.5 Turbo for the work I was doing last week.

Jason Koo: Jen, do we want to talk about the events?

Jennifer Reif: Yeah. Sounds good to me.

Jason Koo: Let's do that.

Jennifer Reif: Do you want to go first? Because I think, actually, your events come up first.

Jason Koo: Oh, yes. Yeah. Yeah. By the time we publish this... So next week, September 9th through 12th, there are quite a few events that Neo4j will be at in San Francisco. There's The AI Conference, which we're sponsoring, September 10th through 11th. And right before that, on September 9th, there's a preconference event that we're doing with Weaviate and quite a few other companies.

Then the night of The AI Conference, we're also doing a hack night with Weaviate, Diffbot, Teleport, and a few other companies. And then on the 12th, which is afterwards, Neo4j has actually got two events. Right? One is with Alexey over at GitHub, and I will be going to AI Night with Zoom and Zilliz over in San Jose. So those are my events for September.

Jennifer Reif: Sounds like a really busy week for September.

Jason Koo: Yes.

Jennifer Reif: Everything's jammed into one week for you.

Jason Koo: Yeah. I will probably be sleeping all through the weekend next week.

Jennifer Reif: Mine are just after that, actually. I have JConf in Dallas the third week of September, so the 24th through the 26th, I'll be there. And then the following week, the very last week of September, I'll be in Colorado for dev2next. Both of these events are more Java-focused, but I'll be present and happy to chat or catch up with anybody who happens to be in the area or attending those events. So that's it for me. How about you, Tim?

Tim Eastridge: So I'm in Charlotte, North Carolina, and we have a data science meetup group here. We'll meet at the end of October, and we're going to have a keynote speaker from Neo4j at that event. So if you're interested in learning more about Neo4j and connecting with data scientists here in the Charlotte area, please come out and connect with us. It'd be great to have you.

Jennifer Reif: Or, of course, to meet with you and ask about your NODES project, your book, or the solutions you're building. Yeah.

Tim Eastridge: Yeah. Yeah. Absolutely. Yeah. Absolutely.

Jennifer Reif: Great. So I think that's pretty much all we had planned for today. Tim, we really appreciate you coming on and chatting about-

Jason Koo: Yeah. Thank you.

Jennifer Reif: ... your work, your journey with graphs and data science, and the work you're doing in the community to help educate and encourage people on their own journeys as well.

Tim Eastridge: Of course. Well, thanks for having me. It's been great.

Jason Koo: Cool. Thank you, everybody. See you next time.

Jennifer Reif: Yup. Bye.

Tim Eastridge: Bye.