freeCodeCamp.org
In this video tutorial, Tom Chant offers an interactive, project-based course for beginners looking to build AI-powered applications with the ChatGPT, DALL-E, and GPT-4 APIs. The course covers prompt engineering, building chatbots, fine-tuning models, and using GPT-4, and includes numerous challenges that encourage editing the code and even running the projects in the browser. Chant guides learners through setting up the OpenAI API and obtaining an API key, and demonstrates how to make fetch requests with JavaScript to produce different outputs from the API. The importance of tokens is emphasized, and learners are challenged to refactor prompts in order to personalize the AI's responses to specific inputs.
In this section, we are introduced to a project-based course by Tom Chant, which teaches how to build AI-powered applications with the ChatGPT, DALL-E, and GPT-4 APIs. The course is designed for beginners who have basic knowledge of vanilla JavaScript and a curious mind. The interactive course comes with a ton of challenges, and it allows users to edit the code and run the projects locally or right in their browser. The course covers various topics like prompt engineering, building chatbots, and fine-tuning a model on one's own data. Additionally, the course introduces the GPT-4 model and how to make use of it, although at the time of recording there is a waiting list for API access.
In this section, the course introduces the movie pitch project, which allows the user to input a single sentence movie idea and generate a complete movie pitch using OpenAI's language model. The course covers working with models, crafting and tweaking prompts to get desired results, training the model with examples, and generating images with words. The course also comes with a warning about the visibility of the OpenAI API key on the front-end, advising users to be mindful not to compromise their API key while developing or sharing their project. The HTML and CSS code for the movie pitch project is reviewed, and the JavaScript code controlling the text area, input container, and movie boss text is introduced.
In this section, the video tutorial focuses on setting up the OpenAI API and obtaining an API key. The tutorial demonstrates how to access the OpenAI homepage to sign up, confirm the sign-up via phone number, and view the generated API key. Additionally, the tutorial shows how to view credit and explains the pay-as-you-go model on the OpenAI website. Once the API is set up, the tutorial turns to the completions endpoint in the OpenAI docs and explains what a "completion" means in the context of the OpenAI API. The tutorial also highlights the key information and code examples, which are given in curl, Python, and Node.js.
In this section, the instructor demonstrates how to make API requests to OpenAI using JavaScript. He starts by obtaining the endpoint information from the API documentation and creating a bare-bones fetch request. Then, he discusses the OpenAI request body, which consists of a model and a prompt, and provides an explanation of prompts. After that, he shows how to modify the request to include a simple prompt and how to log the response. Having confirmed that the API connection is working, he changes the prompt to something more emotional and checks that the response produced is also emotional.
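A minimal sketch of the kind of fetch request described here, assuming the v1/completions endpoint and the text-davinci-003 model; apiKey is assumed to hold the key obtained earlier, and the code runs inside an async function:

```js
const url = 'https://api.openai.com/v1/completions'

const response = await fetch(url, {
    method: 'POST',
    headers: {
        'Content-Type': 'application/json',
        'Authorization': `Bearer ${apiKey}` // apiKey is assumed to be defined elsewhere
    },
    body: JSON.stringify({
        model: 'text-davinci-003',
        prompt: 'Sound enthusiastic in five words or less.'
    })
})
const data = await response.json()
console.log(data.choices[0].text) // the completion text
```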
In this section, the instructor walks the learners through a challenge to make a fetch request to the OpenAI API to get a generic enthusiastic response with a prompt that requests a response of no more than five words. The learners are expected to set up a fetch request, headers, and a body that contains a model and a prompt and then log out the data from the response to check if the syntax is right and a completion is being obtained from OpenAI. The instructor emphasizes not copying and pasting code but rather doing the challenge independently to build muscle memory and fluency. The next step is to display the completion inside the speech bubble.
In this section of the video, the instructor discusses AI models and their purposes. An AI model is an algorithm that recognizes patterns and makes predictions or decisions based on training data. OpenAI offers various models, including GPT-3, GPT-3.5, and GPT-4 for natural language understanding and generation, Codex for generating computer code, and a model for filtering content. These models differ in age, complexity, speed, and cost. The newest of the GPT-3 models, text-davinci-003, which is used in this course, can produce long text output, while the older models are cheaper and faster but produce shorter outputs. As per OpenAI's recommendation, one should start with the best available model and then downgrade to save costs and time where feasible.
In this section, the instructor introduces two useful tools: gpttools.com and the OpenAI Playground. The first tool, gpttools.com, is particularly useful for model selection. Using this tool, the instructor compares two API calls side-by-side using different models to get a better understanding of their performance and cost. The second tool, the OpenAI Playground, allows users to practice with different models by selecting pre-written examples or writing their own prompts. The tool then highlights the results in green and provides a code snippet in Node.js, Python, or curl, allowing users to easily integrate the code into their own projects. Using the code snippet from the OpenAI Playground, the instructor explains how to switch from using a fetch request to the OpenAI dependency, allowing for neater code and less work in the future.
In this section, the instructor sets up a new file called "env.js" that will hold the API key. The API key is placed within an environment variable to mimic what we would do if this were a production app. They import process to access the environment variable, and then they check that the key is working by importing it into a new const called apiKey. Next, they install the openai dependency and import two constructors from it: Configuration and OpenAIApi. The first step is setting up a new instance of the Configuration constructor.
In this section, the instructor updates the code by creating new instances of Configuration and OpenAIApi so that the raw fetch boilerplate can be dropped. They also simplify the fetchBotReply function by keeping only the model and prompt and logging out the response. Because the await keyword is used, the function must be declared as async. The instructor then removes the whitespace from the beginning and end of the string using the trim() method and shows how the project can be set up in VS Code with the openai dependency.
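A rough sketch of what the refactored setup might look like with the openai v3 Node dependency; the prompt wording is illustrative, and env.js is assumed to export a process object carrying the key:

```js
import { Configuration, OpenAIApi } from 'openai'
import { process } from '/env.js' // env.js is assumed to export a process object with env.OPENAI_API_KEY

const configuration = new Configuration({ apiKey: process.env.OPENAI_API_KEY })
const openai = new OpenAIApi(configuration)

async function fetchBotReply(outline) {
    const response = await openai.createCompletion({
        model: 'text-davinci-003',
        prompt: `Sound enthusiastic about this movie idea: ${outline}`
    })
    console.log(response.data.choices[0].text.trim()) // trim() strips leading/trailing whitespace
}
```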
In this section, the instructor guides the viewer on how to download and set up the project folder with its dependencies to run the app. They also demonstrate how to personalize the AI's response to the user's input by utilizing the outline parameter and refactoring the prompt. Lastly, they challenge the viewer to refactor the prompt so the AI gives an enthusiastic, personalized response to the user's input, and they explain a new line of code added to the project.
In this section, the instructor discusses the importance of the max_tokens property in controlling the length of text generated by OpenAI. The concept of "tokens" is introduced: tokens are the chunks of text that OpenAI processes when generating its output, with each token being roughly three-quarters of a word. The instructor explains that if max_tokens is not set, it falls back to a low default, so the completion is cut short and may not make sense, as was seen in the previous example. It is therefore essential to set max_tokens high enough that the completion is long enough for the needs of the project.
In this section, the instructor discusses the importance of tokens and how they impact costs and performance. The max_tokens parameter is not a reliable way to control the verbosity of OpenAI's response, but it can cap the output to keep costs down; good prompt design is the best way to get text of the desired length back from OpenAI. The instructor then sets up a new function called fetchSynopsis to generate a professional synopsis from the user's one-sentence movie idea and display it in the output container. The prompt for this API call should be specific and detailed to avoid imprecise results.
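A sketch of fetchSynopsis under those assumptions, reusing the openai instance configured earlier; the element ID and prompt wording are placeholders, not the course's exact text:

```js
async function fetchSynopsis(outline) {
    const response = await openai.createCompletion({
        model: 'text-davinci-003',
        prompt: `Generate an engaging, professional one-paragraph movie synopsis from this idea: ${outline}`,
        max_tokens: 700 // a roomy limit so the synopsis isn't cut off mid-sentence
    })
    // 'output-text' is a placeholder ID for the output container
    document.getElementById('output-text').innerText = response.data.choices[0].text.trim()
}
```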
In this section, the instructor highlights the importance of prompt design in controlling the length and quality of the AI-generated output. While prompt design can involve requesting a particular word count, it can be imprecise and not always result in the desired length. Using the example of generating a movie synopsis, the instructor demonstrates how including examples in the prompt can help OpenAI better understand the desired outcome and produce more accurate and concise results. The instructor also warns against the limitations of the zero-shot approach, which may produce off-topic results or incorrect formatting for more complex requests.
In this section of the video, the instructor discusses how to use the Advertify app with OpenAI's language models to generate advertising copy. He explains that giving the AI a word count can be unreliable, but using a few-shot approach, where examples are provided to help the AI understand what is required, is more effective. By separating the instructions and context using hash symbols, the AI can distinguish between different sections of the prompt. The examples provided allow the AI to complete the given prompt in a similar style, tailored to the user's input. The result is a body of advertising text that meets the desired length, without an ordered list of incomplete sentences.
In this section, the instructor talks about using the few-shot approach to generate copy for a new product called Solar Swim, a swimming costume with solar cells that can charge your phone while on the beach. The instructor provides two examples in the same format, separated by triple hash symbols (###), and explains that adding too many examples has diminishing returns. After successfully generating copy for the product, the instructor challenges viewers to use the same approach to improve the synopsis fetch request, providing an example synopsis for them to use if needed.
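A few-shot prompt in that style might look like the following; the example product and copy are invented for illustration, not the course's exact text:

```js
const prompt = `Use a product description to create punchy advertising copy of around 50 words.
###
Product description: A handheld blender that charges over USB.
Advertising copy: Smoothies anywhere! Blend on the beach, at the office, or halfway up a
mountain with the only blender that charges from any USB port. Fresh, fast, and fully portable.
###
Product description: ${userInput}
Advertising copy:`
```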
In this section, the instructor discusses how to generate an engaging movie synopsis using OpenAI's GPT-3. He shows how to create a prompt with an example outline and synopsis to ensure that the generated synopsis is appropriate, and highlights the importance of keeping the prompt concise to avoid burning unnecessary tokens. The instructor also presents a challenge to refactor the prompt to generate a short, enthusiastic response while keeping the examples reasonably short. Finally, he demonstrates how to add the example outlines and messages and test the prompt.
In this section, the instructor explains the architecture of the app and the choices made in designing it. While there are ways to make the code more reusable and efficient, the focus of the course is on the AI and not on getting bogged down in JavaScript. The instructor acknowledges that refactoring the code at the end of the course is a good idea. The instructor then proceeds to set up a fetchTitle function to generate an iconic movie title from a synopsis, passing in the synopsis as a parameter. The rest of the task is left to the viewer.
In this section, the instructor discusses how to generate a catchy movie title using the OpenAI API by setting the model property and max_tokens. The instructor advises setting max_tokens high enough to cover the desired title length and introduces the temperature property, which controls how often the model outputs a less likely token, giving more creative and varied results. Additionally, the section explains how cultural differences can affect movie titles and the importance of tweaking prompts and giving examples to get desired outcomes.
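A possible fetchTitle sketch showing the two settings discussed here; the values and prompt wording are illustrative:

```js
async function fetchTitle(synopsis) {
    const response = await openai.createCompletion({
        model: 'text-davinci-003',
        prompt: `Generate a catchy movie title for this synopsis: ${synopsis}`,
        max_tokens: 25,  // enough to cover even a long title
        temperature: 0.7 // higher values output less likely tokens more often: more varied titles
    })
    return response.data.choices[0].text.trim()
}
```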
In this section of the course, the instructor explains the concept of temperature in AI language models such as GPT-3 and GPT-4. Lower temperatures generate less creative and more predictable output, making them ideal for factual responses, while higher temperatures generate more variety and creativity. The instructor demonstrates this by adjusting the temperature in the app and generating different titles from the same outline. The instructor also introduces the concept of text extraction with OpenAI and offers viewers a mini-challenge to extract the stars' names from a synopsis. The instructor then guides viewers through setting up a function to extract the stars' names using OpenAI and render them in the HTML.
In this section, the course instructor challenges users to use OpenAI to extract the names in brackets from a given synopsis. The instructor provides hints and an example to help users achieve the task. Users can check their results by passing a synopsis to the program and seeing the extracted names in a comma-separated list. The instructor also explains how OpenAI's image API can be used to generate images in an application. A simple game is used to demonstrate this: users input a description of a famous painting, and OpenAI generates an image in response, which appears inside a picture frame on the screen. The instructor explains that the createImage endpoint is used to call the OpenAI image API and sets up a request using it.
In this section of the video, the instructor explains the properties needed for working with OpenAI's DALL-E image generation API. These properties include the prompt, n for the number of images, size for the image size in pixels, and response_format, which can be either a URL or a Base64-encoded image (b64_json). The instructor also provides tips on how to write effective prompts to get the desired results from the API. Additionally, the instructor demonstrates how to code these properties into the API call and how to use the image response in the HTML.
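Putting those properties together, a hedged sketch of the createImage call might look like this; the prompt text and element ID are placeholders:

```js
const response = await openai.createImage({
    prompt: imagePrompt,    // a detailed, descriptive prompt works best
    n: 1,                   // number of images to generate
    size: '512x512',        // '256x256', '512x512', or '1024x1024'
    response_format: 'url'  // or 'b64_json' for a Base64-encoded image
})
// 'output-img-container' is a placeholder ID for the picture-frame element
document.getElementById('output-img-container').innerHTML =
    `<img src="${response.data.data[0].url}">`
```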
In this section, the instructor demonstrates how to use OpenAI's DALL-E to generate images from text prompts. The instructor inputs a detailed description of the Mona Lisa and generates a depiction of the painting that is strikingly accurate. They explain that accurate image generation requires a detailed, descriptive prompt, and that OpenAI knows a lot about styles, including impressionism, as well as different lights, shades, and hues. The instructor encourages users to experiment with image prompts. Additionally, the instructor discusses how to use the title and synopsis of a film to automatically generate image prompts using two functions.
In this section, the instructor guides viewers through a challenge to generate a short description of an image that could be used to advertise a movie based on its title and synopsis, containing no names but rich visual detail. The instructor provides examples and suggests a temperature of 0.8 for the image prompt to avoid too many strange ideas. After generating a suitable image prompt, the instructor sets up an async function called fetchImageUrl to fetch the URL using the prompt generated earlier.
In this section of the transcript, the instructor explains how to use the image prompt to generate an image through OpenAI's API and display it on the HTML page. The challenge is to generate a 512 by 512 pixel image with no garbled text. The instructor walks through passing the necessary properties to the openai.createImage function and retrieving the response URL, which is then used as the source for the image element in the HTML. The instructor also mentions the importance of specifying the output format and size, and makes a few tweaks to the CSS for optimal display.
In this section, the tutorial covers the final steps of the Movie Pitch project, including adding a View Pitch button and wiring it up, updating the UX, making some creative changes to the messages, and testing the project. The tutorial also provides a recap of what was covered in the project, including setting up the OpenAI API, using the zero-shot and few-shot approaches, adjusting the max tokens and temperature settings, and accessing different endpoints. The tutorial concludes by congratulating the user on mastering the basics of using the OpenAI API and providing guidance on where to go next.
In this section, the instructor suggests ways to take the previous project to the next level and make it one's own, such as refactoring the JavaScript to avoid repetition in the API calls. The instructor also suggests using OpenAI to create a script for the movie, using the createImage endpoint to create character sketches, or tailoring the app to a specific genre. The next project involves building a chatbot using GPT-4, the latest OpenAI model at the time of recording, and the instructor provides a link for users to join the waiting list for the GPT-4 API. The instructor outlines what the upcoming section will cover, including chatbot syntax, personality, and penalties, as well as storing conversations in a database. The instructor also warns users to keep the API key safe, since it is visible on the front end of the code.
In this section, the instructor goes over the code for setting up the chatbot conversation div and listening for user input. The code creates a new element to display user input and moves the dialogue down to show the latest messages. The instructor also provides a renderTypewriterText function that gives a typewriter effect to the AI's text as it is rendered. The section then moves on to an overview of how the GPT-4 model works with chatbots, explaining that chatbots face a big problem called "hallucinations," where the AI makes up a plausible answer when it doesn't know the right one.
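A minimal sketch of such a typewriter-effect function; chatbotConversation is assumed to reference the conversation div, and the CSS class names are placeholders:

```js
function renderTypewriterText(text) {
    const newSpeechBubble = document.createElement('div')
    newSpeechBubble.classList.add('speech', 'speech-ai', 'blinking-cursor') // placeholder class names
    chatbotConversation.appendChild(newSpeechBubble)
    let i = 0
    const interval = setInterval(() => {
        newSpeechBubble.textContent += text.slice(i, i + 1) // reveal one character at a time
        if (i === text.length) {
            clearInterval(interval)
            newSpeechBubble.classList.remove('blinking-cursor')
        }
        i++
    }, 50)
}
```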
In this section, the instructor discusses the challenges of chatbots with regards to memory and context. To ensure that the conversation flows logically, the model needs to know the conversation's context. Hence, it is necessary to send the entire conversation so far with each API request. The instructor explains the process by which the conversation is stored as an array of objects using special OpenAI syntax to instruct the chatbot how to behave. The conversation array is the single source of truth for all the interactions with OpenAI. The first object in the array is the instruction object with two key-value pairs: role and content, which determine how the AI behaves and responds.
In this section, the instructor sets up the conversation array and instruction object, with a simple initial instruction of "you are a highly knowledgeable assistant that is always happy to help". The user's input is then added to the conversation array, stored as an object with a role of 'user' and the content being the user's input. The next step is to call the fetchReply function when the event listener detects a submit; this is an async function that awaits the API call via the createChatCompletion function. The instructor explores the API reference for this endpoint, which requires an object with two properties, model and messages, plus optional settings such as temperature, max_tokens, and presence_penalty.
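In code, the conversation array and instruction object described here might look like this, using the role/content pairs of the chat completions syntax:

```js
const conversationArr = [
    {
        role: 'system', // the instruction object controls how the AI behaves
        content: 'You are a highly knowledgeable assistant that is always happy to help.'
    }
]

// When the event listener detects a submit, store the user's input
conversationArr.push({ role: 'user', content: userInput })
```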
In this section, the instructor discusses the use of the new and impressive GPT-4 model, which makes huge improvements over its predecessors. He explains that while the docs mention gpt-3.5-turbo, GPT-4 is listed under chat completions in the endpoint compatibility table, so it will work just fine. He also sets a challenge to create an object with a model property of 'gpt-4' and a messages property that holds the conversation array, then ask a question, hit Send, and log out the response to see if it works. Finally, the instructor explains that the response needs to be used to update the DOM and the conversation array, and presents a challenge to pass the completion to the renderTypewriterText function to update the DOM and push an object to the conversation array with a role of 'assistant' and the content being the completion returned from the API.
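A sketch of the full round trip the challenge describes, assuming the openai v3 dependency and the renderTypewriterText helper from earlier:

```js
async function fetchReply() {
    const response = await openai.createChatCompletion({
        model: 'gpt-4',
        messages: conversationArr
    })
    const completion = response.data.choices[0].message.content
    conversationArr.push({ role: 'assistant', content: completion }) // keep the context up to date
    renderTypewriterText(completion) // update the DOM
}
```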
In this section, the video tutorial covers how to prevent the chatbot from becoming repetitive by changing the frequency penalty and presence penalty settings when making a request to the API. The two settings offer some control over how repetitive the chatbot's output is; both are numbers from -2 to 2 that default to zero. These settings help ensure that the chatbot's language sounds natural and that it keeps up with the context of the conversation.
In this section, the instructor explains the concepts of presence penalty and frequency penalty in relation to chatbot conversations. The presence penalty increases the likelihood of the model talking about new topics, whereas the frequency penalty decreases the chances of the model repeating the same phrases. The instructor encourages viewers to experiment with different presence penalty settings and prompts to see the effects, while noting that it is a subtle setting. Similarly, viewers are challenged to generate text with high frequency penalty settings and are provided with a file to paste their completions into for comparison. The instructor notes that the frequency penalty can prevent phrases from being overused.
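Both settings slot straight into the request object; the values below are illustrative starting points for experimentation, not the course's final choices:

```js
const response = await openai.createChatCompletion({
    model: 'gpt-4',
    messages: conversationArr,
    presence_penalty: 0,    // -2 to 2, default 0: higher values nudge the model toward new topics
    frequency_penalty: 0.3  // -2 to 2, default 0: higher values discourage repeating phrases
})
```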
In this section, the speaker discusses the effects of frequency penalty on the language generated by the AI model. They caution against setting the frequency penalty too low or too high, as this can lead to nonsensical or repetitive language. They provide an example of how a high frequency penalty results in strange, ungrammatical sentences that appear almost poetic. The speaker advises making small changes and testing them to find the optimal settings for the model. Finally, they decide to set the frequency penalty to 0.3 for their chatbot app and move on to altering the chatbot's personality.
In this section, the video tutorial shows how to change the personality of the chatbot by updating the content property's value and challenges viewers to try it themselves. The tutorial then explores how to control the chatbot's behavior to suit different needs, including interacting with children or non-native speakers of a language. Next, the video moves on to adding a Google Firebase database to the project to persist the chat and allow users to pick up where they left off even after a refresh or reload. The tutorial provides step-by-step instructions on creating a Firebase account and setting up a Realtime Database, which will store the conversation array and be accessed when making calls to the OpenAI API.
In this section, we learn about setting up the Firebase database for our chatbot app and importing the methods needed to access it. We start by creating the Firebase project and getting the database URL. We then use the Firebase dependency and import the initializeApp, getDatabase, and ref methods. After setting up the appSettings and database consts, we initialize our app and database and create a const called conversationInDb, which is our single source of truth for the conversation with the chatbot. Finally, we make changes to the HTML and CSS to add a clear button using CSS grid.
In this section of the video tutorial, the instructor demonstrates how to push user input to the Firebase Realtime Database using the push method rather than storing conversation data in a JavaScript array. The instructor then explains why keeping the instruction in the database presents issues, such as making it harder to edit the chatbot's behavior or personality in the future. Instead, they opt to keep the instruction in the index.js file and add it to the array sent to OpenAI with each request. Finally, the instructor makes additional changes to the fetchReply function, such as fetching the latest conversation from the database before submitting it to the OpenAI API.
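A sketch of the Firebase setup and the push call described in these two sections, assuming the v9 modular SDK and a placeholder database URL:

```js
import { initializeApp } from 'firebase/app'
import { getDatabase, ref, push } from 'firebase/database'

const appSettings = {
    databaseURL: 'https://your-project-default-rtdb.firebaseio.com/' // placeholder URL
}
const app = initializeApp(appSettings)
const database = getDatabase(app)
const conversationInDb = ref(database) // single source of truth for the conversation

// Persist each user message to the database instead of a local array
push(conversationInDb, { role: 'user', content: userInput })
```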
In this section, the instructor begins preparing to make a request to the OpenAI API by checking the format of the data in the database. They discover that they need an array of objects, but the Firebase identifiers are getting in the way. To fix this, they use the val() method from Firebase, which extracts a JavaScript value from a data snapshot. They then use Object.values to convert the object into an array, which is the format the OpenAI API expects. The instructor also mentions that the instruction object needs to be included in the conversation array before it is sent to the API. They set a challenge to add the instruction object to the array, but warn of an error that users will need to debug.
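Under those assumptions, the database-backed fetchReply might be sketched like this; instructionObj stands for the instruction kept in index.js, and get must also be imported from firebase/database:

```js
async function fetchReply() {
    const snapshot = await get(conversationInDb)
    if (snapshot.exists()) {
        // snapshot.val() returns an object keyed by Firebase IDs;
        // Object.values() converts it into the array format OpenAI expects
        const conversationArr = Object.values(snapshot.val())
        conversationArr.unshift(instructionObj) // prepend the instruction object
        const response = await openai.createChatCompletion({
            model: 'gpt-4',
            messages: conversationArr
        })
        // ...then render the completion and push it to the database
    }
}
```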
In this section, the instructor challenges the viewer to add the completion to the database and test that it is working. To do so, the viewer can use the push method, passing it the conversation in the database along with the object they want to push. The viewer can then ask the chatbot a question to verify that the database has been updated. However, the instructor notes that upon refreshing the mini-browser, a new conversation is started without showing the existing one. The viewer is therefore tasked with creating a function called renderConversationFromDb, which renders any existing conversation from the database when the app loads. The function should get the conversation from the database as an array of objects, iterate over that array with a forEach loop, and create a new speech bubble for each database object using document.createElement.
In this section of the video, the instructor demonstrates how to append each speech bubble to the chatbot conversation, using a ternary operator to choose between two CSS classes depending on whether the speaker is human or AI. The conversation is then automatically rendered from the content in the database, with any potentially malicious input neutralized by using textContent instead of innerHTML. The instructor also shows how to clear the conversation with the start-over button: a single click event listener calls the remove() method on the database reference and resets the chatbot conversation to a hard-coded introductory message.
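A sketch of renderConversationFromDb incorporating the ternary and textContent points above; the CSS class names are placeholders:

```js
function renderConversationFromDb() {
    get(conversationInDb).then(snapshot => {
        if (snapshot.exists()) {
            Object.values(snapshot.val()).forEach(dbObj => {
                const newSpeechBubble = document.createElement('div')
                newSpeechBubble.classList.add(
                    'speech',
                    dbObj.role === 'user' ? 'speech-human' : 'speech-ai' // placeholder class names
                )
                chatbotConversation.appendChild(newSpeechBubble)
                // textContent, not innerHTML, so stored input can't inject markup
                newSpeechBubble.textContent = dbObj.content
            })
            chatbotConversation.scrollTop = chatbotConversation.scrollHeight
        }
    })
}
```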
In this section, the course concludes its chatbot project with a discussion of how users can expand on the foundations they have gained in building a human-language-capable chatbot using GPT-4 and the createChatCompletion endpoint. The instructor encourages students to think creatively and to develop chatbots with specific purposes for company or organizational use, such as a coding expert, a poetry generator, or an academic assistant. The instructor also introduces the idea of fine-tuning AI models by uploading a dataset, to address the limitations of AI models when answering organization-specific questions that go beyond their internet training data.
In this section of the video, the instructor walks through the process of converting a chatbot into a fine-tuned support bot for a fictional drone delivery company called We Wing It. The instructor makes aesthetic changes to the HTML and CSS of the chatbot before diving into the AI process. To fine-tune the chatbot, they need data in CSV or JSON format, which they will input into OpenAI's data preparation tool to ensure the format is correct. Next, they will upload this data to OpenAI and tell it to make their fine-tuned model. After this, they will make changes to the existing code to use the new model. OpenAI recommends needing at least a few hundred high-quality examples that have been vetted by human experts to fine-tune a model effectively.
In this section, the speaker discusses the formatting of data for OpenAI's GPT-3 model. OpenAI requires the data to be in JSONL format and to meet specific criteria, such as prompts ending with a separator and completions starting with a whitespace. The stop sequence is also mentioned, but it is discussed later in the project. Since writing JSONL by hand is challenging, the speaker organizes the data in a spreadsheet saved as CSV, with each row containing a prompt and a completion. The speaker also shows an example of how prompts can have multiple parts involving a short conversation. Finally, the speaker provides the data for download and instructs the viewer to save it in a specific folder.
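For reference, a single JSONL training line meeting those criteria might look like this; the question and answer are invented for illustration:

```
{"prompt": "How long does delivery take? ->", "completion": " We Wing It deliveries usually arrive within 30 minutes of ordering.\n"}
```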
In this section of the video, the presenter guides the viewer through the setup process to use the OpenAI data preparation tool. First, the command line interface environment needs to be set up, which requires Python 3 and pip package manager. The presenter provides commands to check if Python 3 and pip are installed and to install the OpenAI CLI using pip install. The OpenAI CLI is used to prepare the data, and the presenter explains how to navigate to the data folder and use the CLI to prepare the data. In case the missing pandas error occurs, the presenter provides a pip install command to install pandas.
In this section, the video tutorial explains how to prepare the data and fine-tune a model for the AI chatbot using the OpenAI API and the DaVinci model. The tutorial shows how to use the OpenAI CLI tool to convert a CSV file to JSONL format, adding a separator suffix to each prompt and an ending to each completion. Once the data is prepared, the DaVinci model can be fine-tuned by using the CLI tool to create a new model, specifying the location of the prepared JSONL file and the base model to use (davinci). The tutorial notes that the fine-tuning process can take some time and that it is common for the live updates to disconnect, but users can reconnect by running the command again. The tutorial also explains that the conversation array needs to be changed to a conversation string for the DaVinci model and provides mini challenges for users to implement this change.
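The CLI commands for these two steps, as they worked with the openai CLI at the time of recording; the CSV file name is a placeholder:

```bash
# Prepare the CSV data, producing a *_prepared.jsonl file
openai tools fine_tunes.prepare_data -f wewingit.csv

# Fine-tune the davinci base model on the prepared data
openai api fine_tunes.create -t wewingit_prepared.jsonl -m davinci
```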
In this section of the video, the instructor walks learners through updating the conversation string with just the user's input using JavaScript. They explain how to convert the conversation array to conversation string and use "+=" instead of push to update it with the user's input. After this, the instructor challenges learners to swap out the GPT-4 model for their fine-tuned model and change the "messages" property to "prompt." They then guide learners through logging out the response to test the API call and preparing for further JavaScript changes in the next section.
In this section, the instructor updates the conversation string to include the completion and sends it to renderTypewriterText. They then explain that, since they are using a DaVinci model, they need to set max_tokens to a much higher number. When they do this, the chatbot produces nonsensical gibberish, so they check the settings, which are almost at their default values, and lower the temperature to zero. The completion is still strange, so temperature alone is not the fix, although a low temperature will remain useful. The instructor then goes back to the criteria for the data format and notes that each prompt must end with a separator to tell the model where the prompt ends and the completion begins. They verify that this separator was added to the end of each prompt in the JSONL file but is missing from the conversation string, which they suspect is causing some of the issues. The instructor challenges viewers to append the arrow separator, preceded by a space, to the end of the prompt as it is added to the conversation string.
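The separator challenge boils down to roughly one line when building the conversation string; the " ->" separator matches what the data preparation tool appended to each prompt:

```js
// Append the user's input followed by the " ->" separator used during fine-tuning
conversationStr += ` ${userInput} ->`
```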
In this section, the instructor reviews the formatting criteria for the fine-tuning data, focusing on the second criterion, which requires each completion to start with a single whitespace. After attempting to add the whitespace via the fetchReply function, to no avail, the instructor moves on to the third criterion, which requires each completion to end with a stop sequence. A stop sequence is explained as an optional setting that tells the API when to stop generating tokens, and an example is given where a stop sequence is used to limit the number of items returned from an API request. The instructor demonstrates how to add a stop sequence to a completion and how it affects the returned results.
In this section of the video, the instructor explains the importance of adding a stop sequence to the chatbot's responses to prevent the bot from having a conversation with itself and generating bizarre, illogical answers until it hits the token limit. He then challenges viewers to add the newline character and the separator as stop sequences, and to add a "\n" suffix to all completions to make it clear where a completion ends. The instructor demonstrates the effectiveness of the stop sequence by testing the bot with questions from the data and seeing a significant improvement in the quality of the responses. However, he notes that the bot is still experiencing too many hallucinations.
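With the separator and stop sequence in place, the request body might be sketched as follows; the fine-tuned model name is a placeholder:

```js
const response = await openai.createCompletion({
    model: 'davinci:ft-wewingit-2023-xx-xx', // placeholder fine-tuned model name
    prompt: conversationStr,
    max_tokens: 100,
    temperature: 0,
    stop: ['\n', '->'] // stop at the "\n" completion suffix or a new prompt separator
})
```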
In this section, the instructor discusses the concept of epochs and how it can help when fine-tuning on a small dataset. An epoch is one cycle through the training dataset when fine-tuning a model. By default, OpenAI completes four epochs, but for smaller datasets this might not be enough; increasing the number of epochs can give better results at the cost of more time and money. The instructor challenges users to use the OpenAI CLI tool to build a new fine-tuned model with n_epochs set to 16 and to test it by asking questions such as the company's phone number and email address. Once the process is complete, the new model can replace the old one, and users can test it with different prompts.
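The epochs challenge adds one flag to the earlier fine-tune command (file name again a placeholder):

```bash
openai api fine_tunes.create -t wewingit_prepared.jsonl -m davinci --n_epochs 16
```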
In this section of the video, the presenter discusses the limitations and strengths of the chatbot built on the fine-tuned OpenAI model. The chatbot is able to provide logical and rational answers, but it is not conversational and lacks humanity, owing to the limited amount of data used to train the model. However, the chatbot demonstrates the potential of the technology, and with more quality data it could be improved further. The presenter then explains the process of deploying the support bot on the internet using Netlify while keeping the API key hidden. This is done by sending requests to a serverless function with access to the API key, rather than storing the key on the front end, thus preventing it from being visible in the DevTools Network tab.
In this section, the tutorial covers how to store the project locally, install the project's dependencies, publish the code to GitHub, and set up a Netlify account to deploy the app. The tutorial emphasizes the importance of removing or ignoring the API key within the code. Next, the tutorial shows how to store the API key in a Netlify environment variable, adding it in the Netlify settings with the key OPENAI_API_KEY and the actual API key as the value.
In this section of the full course for beginners on building AI apps with ChatGPT, DALL-E, and GPT-4, the instructor explains how to store the API key as an environment variable in Netlify. However, since the API key would still be visible in the front end, the instructor explains the need for a serverless function to make the API call away from the front end. To set up this serverless function, the Netlify CLI needs to be installed, and users follow a series of questions to connect the directory to an existing Netlify site. The CLI can then provide a boilerplate for a serverless function, which can be adapted so that requests go to this new endpoint instead of directly to the OpenAI API.
In this section of the video, the instructor explains that instead of calling the OpenAI API directly from the front end, the app will now call the Netlify serverless function. Viewers are given a challenge to update the fetchReply function by making a fetch request to the URL via POST, holding the conversation string in the body, and saving the response to a const before logging it out. Once they have done this, they can copy the updated fetchReply function into VS Code and delete any code from index.js that is no longer needed before pushing the changes to GitHub to trigger a redeploy. The instructor then walks through the steps to complete the challenge and deletes all the code that is no longer needed.
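A sketch of the updated front-end fetchReply, with a hypothetical serverless function name:

```js
async function fetchReply() {
    const response = await fetch('/.netlify/functions/fetchAI', { // hypothetical function name
        method: 'POST',
        headers: { 'Content-Type': 'text/plain' },
        body: conversationStr // the conversation string travels in the request body
    })
    const data = await response.json()
    console.log(data)
}
```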
In this section of the course, the instructor explains how to bring OpenAI into the serverless function using the dependencies installed through npm. He goes through uncommenting the boilerplate code that is needed for using OpenAI. The API key is stored in the Netlify environment variable and can be accessed using process.env. To use OpenAI inside the serverless function, the conversation string is accessed by taking the event parameter as input and replacing the conversation string with event.body. In the return statement, a key-value pair is added where the key is reply and the value is response.data. The final step is to update the two commented-out lines of code on the front end to render the completion fetched from the OpenAI API.
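Putting those pieces together, the serverless function might be sketched like this; the file and model names are placeholders:

```js
// netlify/functions/fetchAI.js — hypothetical file name
const { Configuration, OpenAIApi } = require('openai')

const configuration = new Configuration({
    apiKey: process.env.OPENAI_API_KEY // read from the Netlify environment variable
})
const openai = new OpenAIApi(configuration)

exports.handler = async function (event) {
    const response = await openai.createCompletion({
        model: 'davinci:ft-wewingit-2023-xx-xx', // placeholder fine-tuned model name
        prompt: event.body, // the conversation string sent from the front end
        max_tokens: 100,
        stop: ['\n', '->']
    })
    return {
        statusCode: 200,
        body: JSON.stringify({ reply: response.data })
    }
}
```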
In this section, the instructor explains how the API key stays hidden when the OpenAI project is deployed on the live internet using Netlify. The endpoint will only accept fetch requests from its own domain, and a CORS policy can be used to allow other domains to access the endpoint if required. The instructor gives a quick recap of the topics covered, including using the OpenAI API, prompt engineering, building chatbots, and fine-tuning a model on one's own data. Suggestions for where to go next with AI include finding a need for AI and building an app for it, as well as keeping an eye on the AI scene as new developments arrive in text, images, voice, and video. The instructor encourages students to share their projects on Scrimba's Discord server and provides a Twitter handle for further communication.