fix: typos in 01-introduction.md according to Docs Feedback issue #40
This tutorial will guide you through creating a serverless ChatGPT-like application with RAG (Retrieval-Augmented Generation).

The chatbot will be able to answer questions based on a set of enterprise documents uploaded by a fictional company called _Contoso Real Estate_.

Here's an example of the application in action:

![ChatGPT with RAG](../../docs/images/demo.gif)

The goal of the tutorial is to provide you with a hands-on experience building a serverless application using Azure Services and LangChain.js. You'll be guided through each step of the process, from setting up the environment to deploying the application.

The frontend of the application is provided so that you can focus on the backend code and technologies.

## Prerequisites

You can follow the tutorial and run the application using one of the following options:

- Run the application locally on your machine
- Run the application using Codespaces

### Run using Codespaces

If you choose to use **[Codespaces](https://github.com/features/codespaces)**, you can follow the steps described in the `README.md` file at the root of the project.

> **Note**: If you are using Codespaces, you don't need to install any of the prerequisites mentioned above. Codespaces already has all the necessary tools installed. Codespaces can be used for free for up to 60 hours per month, and this is renewed every month.
### Run Locally

If you choose to use a local environment, you will need to install:


> If you're a Windows user, you'll need to install [PowerShell](https://learn.microsoft.com/powershell/scripting/install/installing-powershell-on-windows?view=powershell-7.4), [Git Bash](https://git-scm.com/downloads) or [WSL2](https://learn.microsoft.com/windows/wsl/install) to run the bash commands.

## Project Overview

Building AI applications can be complex and time-consuming. By using LangChain.js and Azure serverless technologies, you can greatly simplify the process. This application is a chatbot that uses a set of enterprise documents to generate AI responses to user queries.

The code sample includes sample data to make trying the application quick and easy, but feel free to replace it with your own. You'll use a fictitious company called Contoso Real Estate, and the experience allows its customers to ask support questions about the usage of the company's products. The sample data includes a set of documents that describes the company's terms of service, privacy policy, and support guide.

## Understanding the project architecture

The architecture of the project is shown in the following diagram:

![ChatGPT with RAG](../../docs/images/architecture.drawio.png)

To understand the architecture of the project, let's break it down into its individual components:

1. **Web App:**

- The user interface for the chatbot is a web application built with **[Lit](https://lit.dev/)** (a library for building web components) and hosted using **[Azure Static Web Apps](https://learn.microsoft.com/azure/static-web-apps/overview)**. It renders a chat interface for users to interact with and ask questions.
- The code is located in the `packages/webapp` folder.

2. **Serverless API:**

- When a user submits a query through the web app, it is sent via HTTP to an API built using Azure Functions.
- The API uses LangChain.js to process the query.
- The API handles the logic of ingesting enterprise documents and generating responses to the chat queries.
- The code for this functionality will be shown later in the tutorial and is located in the `packages/api` folder (a minimal sketch of such an endpoint follows this list).

3. **Database:**

- Text extracted from the documents and the vectors generated by LangChain.js is stored in Azure Cosmos DB for MongoDB vCore.
- The database allows for the storage and retrieval of text chunks using vector search, which enables quick and relevant responses based on the user's queries.

4. **File Storage:**

- The source documents, such as terms of service, privacy policy, and support guides for Contoso Real Estate, are stored in Azure Blob Storage. This is where the PDF documents are uploaded and retrieved from.

5. **Azure OpenAI Service:**
- This service is where the AI Model (a Large Language Model or LLM) is hosted. The model is capable of understanding and generating natural language. This is used to embed text chunks or generate answers based on the vector search from the database.
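
To make the Serverless API component above more concrete, here is a minimal sketch of an HTTP endpoint using the Azure Functions Node.js v4 programming model. The route name, request shape, and `generateAnswer` helper are illustrative assumptions rather than the project's actual code, which you'll build later in the tutorial.

```typescript
import { app, HttpRequest, HttpResponseInit, InvocationContext } from "@azure/functions";

// Hypothetical helper standing in for the LangChain.js logic (retrieval + generation)
// that the tutorial implements later.
async function generateAnswer(question: string): Promise<string> {
  return `You asked: ${question}`; // placeholder answer
}

// HTTP-triggered function: the web app POSTs the user's question here.
app.http("chat", {
  methods: ["POST"],
  authLevel: "anonymous",
  handler: async (request: HttpRequest, context: InvocationContext): Promise<HttpResponseInit> => {
    const { question } = (await request.json()) as { question: string };
    context.log(`Received question: ${question}`);
    const answer = await generateAnswer(question);
    return { jsonBody: { answer } };
  },
});
```

By default, Azure Functions exposes HTTP functions under the `/api` route prefix, so the web app would call an endpoint like `/api/chat`.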

Let's examine the application flow based on the architecture diagram:

- A user interacts with the chat interface in the web app.
- The web app sends the user's query to the Serverless API via HTTP calls.
- The Serverless API interacts with Azure OpenAI Service to generate a response, using the data from Azure Cosmos DB for MongoDB vCore.
- If there's a need to reference the original documents, Azure Blob Storage is used to retrieve the PDF documents.
- The generated response is then sent back to the web app and displayed to the user.

The architecture is based on the RAG (Retrieval-Augmented Generation) pattern, which combines the ability to retrieve information from a database with the ability to generate text from a language model. You'll learn more about RAG later in the tutorial.
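
To illustrate the RAG pattern with LangChain.js (this is a conceptual sketch, not the project's actual code), the example below retrieves the text chunks most relevant to a question and passes them to the model as context before generating an answer. It assumes the `@langchain/openai` and `langchain` packages, Azure OpenAI credentials supplied through environment variables, and an in-memory vector store as a stand-in for Azure Cosmos DB for MongoDB vCore.

```typescript
import { AzureChatOpenAI, AzureOpenAIEmbeddings } from "@langchain/openai";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { Document } from "@langchain/core/documents";
import { SystemMessage, HumanMessage } from "@langchain/core/messages";

// Stand-in documents; the real application ingests the Contoso Real Estate PDFs instead.
const documents = [
  new Document({ pageContent: "Support requests can be opened 24/7 through the Contoso customer portal." }),
  new Document({ pageContent: "The privacy policy explains how customer data is collected and stored." }),
];

export async function answerQuestion(question: string): Promise<string> {
  // 1. Embed the text chunks and index them for vector search.
  //    (The project stores vectors in Azure Cosmos DB for MongoDB vCore;
  //    an in-memory store keeps this sketch self-contained.)
  const vectorStore = await MemoryVectorStore.fromDocuments(documents, new AzureOpenAIEmbeddings());

  // 2. Retrieval: find the chunks most similar to the user's question.
  const relevantChunks = await vectorStore.similaritySearch(question, 2);
  const context = relevantChunks.map((chunk) => chunk.pageContent).join("\n");

  // 3. Generation: ask the chat model to answer using only the retrieved context.
  //    Azure OpenAI settings (endpoint, key, deployment names) are read from environment variables.
  const model = new AzureChatOpenAI();
  const response = await model.invoke([
    new SystemMessage(`Answer the question using only this context:\n${context}`),
    new HumanMessage(question),
  ]);
  return response.content as string;
}
```

Conceptually, the API you'll build performs the same two steps (retrieval, then generation), with document ingestion and storage handled by the Azure services described above.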

## Executing the Project

