Deploying Rasa Chatbot on Heroku Using Docker

DLMade · Published in Analytics Vidhya · Apr 6, 2020


Welcome to the sequel to my first blog, where I talked about integrating a Rasa chatbot with Django.

When I was trying to deploy my Rasa chatbot, I ran into a lot of challenges, and the scarcity of tutorials out there didn't make it any easier. Heroku is a top choice for many development projects, especially Python applications, because of its ease of use. Heroku also provides a free plan without asking for your credit card, which helps junior developers create "hobby" apps or small apps that don't need much scalability.

Deploying a Rasa chatbot on the Heroku free tier is quite tricky. The free tier comes with limited memory: it gives you only 512 MB of RAM, and a Rasa chatbot together with its dependencies tends to exceed that limit. To mitigate this limitation, we turn to Docker.

Docker is a containerization technology that has been gaining traction in the DevOps community. Ever heard the good old joke in developers' chatrooms, "NEVER DEPLOY ON FRIDAYS"?

Follow the steps below to deploy your Rasa chatbot on Heroku and you won't have to worry about that old developer's curse.

Prerequisites

1. Heroku CLI

You have to install the Heroku CLI. The installation steps can be found here: https://devcenter.heroku.com/articles/heroku-cli

2. Docker

Install Docker by following the instructions at https://docs.docker.com/get-docker/

Step 1: Create a rasa project

Open your terminal where you wish to place your Rasa chatbot and run the command:

rasa init

This command creates a sample chatbot project with some example training data to get you started.
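If you want to try the bot before deploying it, and make sure a trained model exists in the models/ directory (the server we deploy later loads its model from there), you can optionally train and chat with it locally:

rasa train ## trains a model and saves it to the models/ directory
rasa shell ## starts an interactive chat session in the terminal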

Step 2: Create a docker file

Open your favorite editor and paste the following code. Remember to save the file as "Dockerfile" (with no extension) in the root of your Rasa project.

FROM ubuntu:18.04
ENTRYPOINT []
RUN apt-get update && apt-get install -y python3 python3-pip && \
    python3 -m pip install --no-cache --upgrade pip && \
    pip3 install --no-cache rasa==1.5.3
ADD . /app/
RUN chmod +x /app/start_services.sh
CMD /app/start_services.sh

These instructions should be clear if you have some basic knowledge of Docker or Linux commands: we start from an Ubuntu base image, install Python and Rasa, copy the project into the image, and make the startup script executable. The last command of this file is explained in Step 3.
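Before involving Heroku at all, you can optionally check that the image builds cleanly on your machine; the tag rasa-heroku below is just a placeholder name I'm using for illustration:

docker build -t rasa-heroku .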

Step 3: Create a start_services.sh file

Create a file named start_services.sh in the root of your Rasa project (next to the Dockerfile) and paste the lines below:

cd app/

# Start the Rasa server with the NLU model
rasa run --model models --enable-api --cors "*" --debug \
-p $PORT

This script contains the commands Heroku will run when it starts the container; $PORT is the port number Heroku assigns to the app at runtime.
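As an optional local sanity check, you can run the image built earlier and supply a port yourself in place of the one Heroku would normally inject through $PORT (5005 is just an arbitrary local choice, and rasa-heroku is the placeholder tag from the build step):

docker run -e PORT=5005 -p 5005:5005 rasa-heroku

The Rasa server should then be reachable at http://localhost:5005.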

With that said and done, let's deploy to Heroku!

Open your terminal in your Rasa project directory and run the following commands:

git init
git add .
git commit -m "init commit"
heroku login
heroku create your_website_name ## Replace your_website_name with any name
heroku container:login
heroku container:push web ## While this runs you can go grab a coffee
heroku container:release web

If you have problems running the container commands, add the parameter -a appname, where appname is your Heroku app name, e.g. heroku container:push web -a appname.

Your chatbot deployment is now done. Access your project with the URL of the Heroku app you created.
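To confirm the web dyno started correctly (and to debug it if it didn't), you can tail the app's logs; appname below stands for your Heroku app name:

heroku logs --tail -a appname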

You can now access your chatbot via API calls.

As an example, create a Python file and run the script below:

import requests

url = 'https://rasablog.herokuapp.com/webhooks/rest/webhook' ## change rasablog to your app name
myobj = {
    "message": "hi",
    "sender": 1,
}
x = requests.post(url, json=myobj)
print(x.text)

The expected output is:

[{"recipient_id":"1","text":"Hey! How are you?"}]
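If you prefer testing from the command line, a roughly equivalent request with curl (again, rasablog stands for your own app name) would be:

curl -X POST "https://rasablog.herokuapp.com/webhooks/rest/webhook" \
-H "Content-Type: application/json" \
-d '{"sender": "1", "message": "hi"}'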

That's all for deploying a Rasa chatbot on Heroku. Keep an eye out for the sequel to this blog, where we will deploy a Rasa chatbot integrated with the Django web framework on Heroku. Till then, Happy Coding!!!

If you like this post, HIT Buy me a coffee! Thanks for reading.😊

Your every small contribution will encourage me to create more content like this.
