The new wave of interest in large language models (LLMs) has brought vector databases to the forefront of technologies for AI applications. Meeting the ever-growing demand for accurate, performant query results from vector databases is exactly where GPUs excel. We'll use a GPU-native vector database to introduce the concept of vector search, then work through practical use cases. Specifically, we'll use the vector database to build two AI applications that rely on vector search: a document classification application and an LLM-powered digital assistant chatbot.

Prerequisite(s):