Welcome to the final part of the Rasa Advanced Deployment Series. As a recap, we learned how to install Rasa in a VM, scale Pods, connect our Git repository to Rasa X, and register and start custom actions. We also learned about conversation-driven development. Now we will learn about CI/CD. Even though developing a contextual assistant is different from developing traditional software, we should still follow software development best practices. Setting up a Continuous Integration (CI) and Continuous Deployment (CD) pipeline ensures that incremental updates to your bot are improving it, not harming it.
Continuous Integration (CI) is the practice of merging in code changes frequently and automatically testing each change as it is committed.
Welcome to the 5th part of the Rasa Advanced Deployment Series. Here, we will talk about conversation-driven development.
Conversation-Driven Development (CDD) is the process of listening to your users and using those insights to improve your AI assistant. It is the overarching best practice approach for chatbot development.
Developing great AI assistants is challenging because users will always say something you didn’t anticipate. The principle behind CDD is that in every conversation users are telling you — in their own words — exactly what they want. …
If you are familiar with Rasa, you know that we can't achieve everything without creating custom actions.
Custom actions are what turn your assistant into magic. A custom action is where most of your assistant's code lives, written in Python; it can be used to make an API call or to query a database, for example.
So let's create our custom action server.
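To make the idea concrete, here is a minimal sketch of what a custom action in `actions.py` looks like. In a real project you would subclass `rasa_sdk.Action` and receive a `CollectingDispatcher` and `Tracker` from the action server; the tiny stand-ins below only mimic that interface so the sketch runs without Rasa installed, and the action name and the fake "database" are hypothetical.

```python
# Sketch of a custom action. The CollectingDispatcher below is a stand-in
# for rasa_sdk.executor.CollectingDispatcher, and the tracker is a plain
# dict standing in for rasa_sdk.Tracker, so this runs without Rasa.

class CollectingDispatcher:
    """Stand-in: collects messages the action sends back to the user."""
    def __init__(self):
        self.messages = []

    def utter_message(self, text=None, **kwargs):
        self.messages.append(text)


class ActionCheckBalance:
    """Hypothetical custom action that queries a (fake) database."""

    def name(self):
        # Must match the action name listed in domain.yml
        return "action_check_balance"

    def run(self, dispatcher, tracker, domain):
        # In a real action this could be an API call or a database query.
        balances = {"alice": 42.50}
        user = tracker.get("user", "alice")
        dispatcher.utter_message(text=f"Your balance is ${balances[user]:.2f}")
        return []  # a list of events to apply to the conversation


dispatcher = CollectingDispatcher()
action = ActionCheckBalance()
events = action.run(dispatcher, tracker={"user": "alice"}, domain={})
print(dispatcher.messages[0])
```

In a real deployment, the action server loads classes like this and Rasa calls `run()` whenever the corresponding action is predicted.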
To run custom actions, we first have to connect the Git repository; we already know how to do this from the previous blog.
Do the following steps to connect the…
So far we have learned how to install Rasa in a VM and scale Pods. Now let's connect our chatbot to Git so we can keep track of changes and sync Rasa X data with the repository, so that Rasa X automatically updates its data if it finds any changes on the master branch.
Here are the steps to connect the Git repo with Rasa X.
Step 1: Fork https://github.com/RasaHQ/deployment-workshop-bot-1
Step 2: Navigate to your forked copy
Step 3: Copy the SSH URL from your GitHub repository
Step 4: Connect Integrated Version Control in Rasa X
In the previous blog, we installed Rasa in the VM. That installation runs only one Pod for each Deployment in our application. When traffic increases, we will need to scale the application to keep up with user demand.
Scaling is accomplished by changing the number of replicas in a Deployment.
On the VM, issue this command to check how many Pods we have for our deployments:
kubectl -n my-namespace get deployments
Scaling is then a matter of setting the desired replica count on a Deployment, for example:
kubectl -n my-namespace scale deployment <deployment-name> --replicas=3
Deployment is the next goal after building your chatbot. After deployment, you will want to update the bot over time, and this is one of the most important steps when building a conversational AI.
We are going to learn how to deploy our Rasa chatbot on a server, connect it with Git, and keep it updated as changes are made over time.
You can apply for free Google Cloud Platform credits at https://cloud.google.com/free .
I am using Windows as my server VM; you can use any OS.
Docker Desktop (https://docs.docker.com/docker-for-windows/install/)
Helm Chart (https://github.com/helm/helm/releases)
Reading documents with OCR can be very tough to deal with where accuracy is concerned, so we need to do some preprocessing before we feed the image to the OCR engine.
Here are the steps I perform before running OCR.
Step 1: Proper dimensions
I am using Tesseract, so it is best if our image is stored at 300 DPI. If your image contains a lot of text (more than about 300 words), it is better to make your image dimensions around 2500 × 2500 pixels.
from PIL import Image

image = Image.open(filename)
image = image.convert(mode='L')  # convert to grayscale
length_x, width_y = image.size
factor = max(1, float(2500.0 / length_x))  # upscale toward ~2500 px wide
if factor > 1:
    size = (int(factor * length_x), int(factor * width_y))
    image = image.resize(size, Image.LANCZOS)
Ever wanted to create a chatbot for your website or business? Then this blog is for you. Let's go step by step through the process of creating a chatbot and integrating it with a website.
First of all, we will create a new project and environment. To do that, open a terminal and type the commands below.
Now let's activate the environment and install Rasa X:
pip install rasa-x==0.34.0 --extra-index-url https://pypi.rasa.com/simple
Now we have things set up, so we can start building our chatbot. Rasa gives us a sample chatbot when we initialize a new project. …
In this blog, we will look at how to process the SROIE dataset and train PICK-pytorch to extract key information from invoices.
Here is the Colab notebook: click here to run the tutorial code directly.
For the invoice dataset, we are using the ICDAR 2019 Robust Reading Challenge on Scanned Receipts OCR and Information Extraction (SROIE) competition dataset.
The Open Neural Network Exchange (ONNX) is an open-source artificial intelligence ecosystem that allows us to exchange deep learning models. This helps us make our models portable.
At a high level, ONNX allows us to move our models between deep learning frameworks. Currently there is native ONNX support in PyTorch, CNTK, MXNet, and Caffe2, and there are also converters for TensorFlow and CoreML. ONNX also makes it easier to access hardware optimizations.
In this blog, we will look at how to convert a PyTorch model into ONNX format and run inference on CPU systems.
Following is list of providers you can…