Trip Planner: An End-to-end GPT Project with Flask and AWS Lambda

DLMade
7 min readJul 9, 2023


Hi everyone! Have you ever been in a situation where a potential client or employer asked to see your online portfolio before giving you a gig, and you didn’t have one? I can relate.

As a freelancer, I’ve connected with many clients, but it’s not easy to gain their trust without an online portfolio of past work. That’s why I decided to create a portfolio and deploy it to the cloud. One major problem with cloud deployment is cost, and I was determined not to spend an excessive amount on a demo website. So I began exploring alternative approaches that could offer a free-of-charge cloud environment.

Finally, I figured out that if I could deploy my website on AWS Lambda, it would be an incredibly cost-effective solution, since AWS provides a certain number of free requests and compute resources. That makes it an ideal choice for my portfolio website: since the site is primarily intended for client demos, it is unlikely to receive significant traffic.

So I decided to create a small application with the OpenAI API using Python Flask, deploy it on AWS Lambda, and showcase it as my portfolio. In this blog, we will explore the entire process step by step. Buckle up and let’s take a walkthrough.

Requirements

  • AWS account
  • Python

In this tutorial, our main focus will be on the backend and deployment aspects, rather than the user interface side.

Development

Here is the GitHub link to the Flask application for the travel itinerary planner.

Let’s begin with the Python code that will do the magic once we deploy it to the cloud.

# https://github.com/dlmade/trip-planner-blog/blob/main/app.py
from flask import Flask, render_template, request, jsonify
from chat_completions import generate_response

app = Flask(__name__,
            static_url_path='',
            static_folder='templates')


@app.route('/')
def index():
    return render_template('index.html')


@app.route('/get_response', methods=['POST'])
def get_response():
    user_query = request.form['query']
    response = generate_response(user_query)
    return jsonify({'response': response})


if __name__ == '__main__':
    app.run(debug=True)

This is the Flask code. It first renders the index.html page, which asks users for their destination and the number of days in order to create the travel itinerary.
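Before deploying, the /get_response route can be smoke-tested locally with Flask’s test client, without touching OpenAI at all. This is a self-contained sketch (the stub generate_response and its wording are illustrative, not from the repo):

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

def generate_response(query):
    # Stub standing in for the real OpenAI-backed generator,
    # so the route can be exercised offline.
    return f"Itinerary for: {query}"

@app.route('/get_response', methods=['POST'])
def get_response():
    user_query = request.form['query']
    return jsonify({'response': generate_response(user_query)})

# Flask's built-in test client sends the same form-encoded POST
# that the index.html page would send.
client = app.test_client()
resp = client.post('/get_response', data={'query': 'Paris, 3 days'})
print(resp.get_json()['response'])  # Itinerary for: Paris, 3 days
```

This verifies the request/response wiring (form field name, JSON shape) before any cloud setup.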

# https://github.com/dlmade/trip-planner-blog/blob/main/chat_completions.py
import openai

# Replace the placeholder below with your API key
openai.api_key = 'YOUR_OPENAI_KEY'


def generate_response(query):
    # Generate a response using the OpenAI Chat Completions API
    temperature = 0.2
    top_p = 1
    engine = 'gpt-3.5-turbo'

    messages = [
        {"role": "system", "content": "You are a travel itinerary planner. "
            "You will get the travel destination with the days and need to "
            "create an itinerary out of it. If you are not able to understand "
            "the destination, give an error message."},
        {"role": "user", "content": query},
    ]
    response = openai.ChatCompletion.create(
        model=engine,
        messages=messages,
        temperature=temperature,
        top_p=top_p,
    )

    return response['choices'][0]['message']['content'].strip()

After we get the user input, we call the generate_response function, which creates a travel plan with the help of the OpenAI API. Make sure to replace YOUR_OPENAI_KEY before running the code.
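For reference, the message list passed to the Chat Completions call can also be assembled programmatically from a destination and a day count. The helper name and prompt wording below are illustrative, not from the repo:

```python
def build_messages(destination, days):
    # System prompt sets the assistant's role; the user prompt
    # carries the actual request. Wording here is an example only.
    system_prompt = (
        "You are a travel itinerary planner. You will get the travel "
        "destination with the days and need to create an itinerary out of it."
    )
    user_prompt = f"Plan a {days}-day trip to {destination}."
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages("Kyoto", 3)
print(messages[1]["content"])  # Plan a 3-day trip to Kyoto.
```

Separating prompt construction from the API call like this also makes the prompt easy to unit-test.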

These are the two main files that need to be changed per our requirements. The GitHub README contains detailed instructions on how to run the code locally.

Deployment

Lambda

  1. Find Lambda in the AWS console.

Once we have the trip planner app running locally, we can deploy it. First, open the AWS console, search for “Lambda”, and click on it as shown in the reference image below.

AWS console

2. Create a lambda resource.

Once we click on Lambda, we see a screen like the one below. Now we have to hit the Create function button.

Lambda

It will ask for basic details. We only have to provide the name and runtime of the Lambda and leave the other values at their defaults, as per the following image, then create the function.

Now our Lambda resource is ready, but we still need to upload the code from our GitHub repo.

3. Upload the code in AWS Lambda.

Let’s jump back to the code. The GitHub README contains steps to create a zip. After creating the zip, we can upload it to Lambda as shown in the image below.

4. Modify the configurations of AWS Lambda.

Our zip contains a handler file, which is the main entry point for the Lambda, so we need to point the Lambda at it. Click the Edit button and write “handler.lambda_handler” in the Handler textbox.
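For context, this is the response shape that API Gateway’s Lambda proxy integration expects back from lambda_handler. The body below is a minimal illustrative stub, not the repo’s actual handler, which presumably hands the event to the Flask app through a WSGI adapter instead of answering directly:

```python
import json

def lambda_handler(event, context):
    # With proxy integration, API Gateway delivers the whole HTTP
    # request as `event` (path, httpMethod, headers, body, ...) and
    # expects a dict with statusCode, headers, and a string body.
    path = event.get("path", "/")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello from {path}"}),
    }

result = lambda_handler({"path": "/get_response"}, None)
print(result["statusCode"])  # 200
```

If the handler returns anything that doesn’t match this shape, API Gateway responds with a 502 “Internal server error”, which is a common stumbling block.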

Also, make sure to increase the Lambda timeout, up to the 15-minute maximum if needed. Lambda’s default timeout is only 3 seconds, which can break our website.

That completes the Lambda side. Now the question is: how do we interact with the Lambda from a browser? For that, we will create an API Gateway.

API Gateway

  1. Find API Gateway.

To create an API Gateway, open the AWS console, search for “API Gateway”, and click on the first option. We will see a screen like the one below; click the Build button in the REST API box.

2. Create an API Gateway.

It will ask for the following details, where we provide the API name and leave the other values at their defaults.

Now we have an API Gateway resource. Next, we create resources inside it to define the routes of the website.

3. Create Resources and Methods in an API Gateway

We will start by creating an ANY method. We can achieve this with the following steps; the reference image below helps for a better understanding.

1. Click on Actions.

2. Select Create Method.

3. Select ANY from the given choices.

We can see the following screen, where we need to enter these details:

  • Integration type: Lambda Function
  • Use Lambda Proxy integration: checked
  • Lambda Region: the region where we created the Lambda
  • Lambda Function: either the name of the function or its ARN

Click Save, which creates the main route where the initial request comes in.

We will require one more resource that can route all the proxy paths.

To create a proxy resource, follow these steps:

1. Click on the ANY method that we recently created.

2. Press the Actions button and select Create Resource.

3. Now select “Configure as proxy resource” and “Enable API Gateway CORS”, then hit Create Resource.

4. It will create the ANY method by default, so we need to provide details for that ANY method: enter the Lambda name and hit the Save button.

4. Deploy API

We have created all the required routes, so now we can deploy the API. First click on the “/” resource, then hit Actions and select Deploy API.

It will ask us to create a deployment stage. We will give “dev” as the name of the stage and hit Deploy.

Finally, we have a URL where we can interact with the web application.
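To sanity-check the deployed stage from Python, we can build the same form-encoded POST the page sends. The invoke URL below is a placeholder; substitute the one API Gateway shows after deployment:

```python
import urllib.parse
import urllib.request

# Placeholder invoke URL -- replace with your stage's real URL.
stage_url = "https://abc123.execute-api.us-east-1.amazonaws.com/dev"

# Same form field the Flask route reads via request.form['query'].
form = urllib.parse.urlencode({"query": "Tokyo for 5 days"}).encode()
req = urllib.request.Request(f"{stage_url}/get_response",
                             data=form, method="POST")
print(req.get_method(), req.full_url)
# To actually send it (needs the real URL):
#   urllib.request.urlopen(req).read()
```

A non-2xx response here usually points back at the handler setting or the Lambda timeout configured earlier.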

Magic✨

As soon as we hit that URL, we see our live portfolio, ready to showcase to potential clients.

If you like this post, HIT Buy me a coffee! Thanks for reading.😊

Your every small contribution will encourage me to create more content like this.

