What happens in Vegas must not always stay in Vegas!

Sep 20 | 05 min read

Our Chief Data Officer, Rohit Agarwal, recently attended the conference “Predictive Analytics World” in Las Vegas, U.S. We are proud to say that he presented a talk at the conference and was also part of a panel discussion. He believes that conferences in general are a good way to understand where technology is headed and what practical issues people face while using it. They also help in showcasing the impact we are creating through technology in our organization.

The rest of this article is Rohit taking you through his experience of attending the conference.

About the conference

The conference “Predictive Analytics World” is hosted three times a year and is one of the most sought-after conferences among ML/AI enthusiasts. This time around, 700+ folks attended the conference from all over the world. The conference was focused on four major tracks: Finance, Healthcare, Deep Learning, and Business Applications.

The conference was full of wisdom shared by industry leaders from Silicon Valley. The speakers were from Google, Microsoft, InstaCart, Oracle, Nvidia, Bill.com, John Snow Labs, HP, and others. It was an amazing experience listening to them.

About the Talk

I also got an opportunity to present our work at the conference. My talk was titled “Pull Requests Analytics: A quantitative way for performance evaluation of Development Team”.

Below is the abstract that I submitted:

“Pull requests (PRs) have become de facto standard for code review/merge process by development teams using SCM tools like Github and Bitbucket. PR is a rich source of information about developers & reviewers. PRs can give us quite a lot of insights about the coding styles, and logical skills of the developers as every single line of code is being reviewed and bad smells are getting highlighted by the reviewer. The comments/suggestions that the reviewer gives, help in understanding the proficiency of the reviewer. We have developed a set of PR Analytics by applying Transformers-based NLP, Decision Trees & Statistical Analysis on PR data. PR Analytics can be used to perform skill assessments in order to find out the areas of improvement for the development team in a quantitative manner. PR Analytics can also help the Scrum masters & the project managers to better plan their deliverables since now they know the strengths & weaknesses of the development team and can allocate the right developers for the right type of tasks.”

It was a 45-minute talk where I covered the background of the scorecard and then did a deep dive into how we categorize the comments given as part of the PR review. Since the focus was on NLP techniques, comment classification received the most attention.
It was great to see the level of interest the audience showed in my talk. Most of the audience members were amazed that pull requests could be used to qualify work. People were curious about how we came up with the idea and whether it is currently being used in the organization. They were also curious whether there was any resistance from the development teams about being evaluated this way.
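The talk itself isn’t reproduced here, but to give a flavour of the comment-classification step, here is a minimal sketch of categorizing PR review comments. The category names, example comments, and the TF-IDF plus decision-tree pipeline are illustrative assumptions for this post, not the production implementation (the actual scorecard applies Transformers-based NLP alongside decision trees, as described in the abstract):

```python
# Illustrative sketch: classifying PR review comments into categories.
# The categories, example comments, and model choice are assumptions
# for demonstration -- not the actual taxonomy or models from the scorecard.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

# Tiny hand-labeled corpus of hypothetical review comments.
comments = [
    "This loop is O(n^2); consider a set lookup instead.",
    "Cache the result instead of recomputing it on every call.",
    "Please rename this variable to something more descriptive.",
    "Typo in the docstring: 'recieve' should be 'receive'.",
    "Missing unit test for the empty-input case.",
    "Add a test covering the error path here.",
]
labels = [
    "performance", "performance",
    "readability", "readability",
    "testing", "testing",
]

# TF-IDF features feeding a decision tree, echoing the abstract's
# mention of decision trees on top of NLP-derived features.
clf = make_pipeline(TfidfVectorizer(), DecisionTreeClassifier(random_state=0))
clf.fit(comments, labels)

# Classify a new, unseen review comment.
print(clf.predict(["Consider adding a regression test for this bug."]))
```

In practice, a pretrained transformer model would replace the TF-IDF features, and the categories would come from a curated taxonomy of review comments rather than the three toy labels used here.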

Panel Discussion

I also got an opportunity to be a part of a panel discussion on the topic “Will DL take over ML entirely or will it always remain overkill for some projects”.

My first experience of being part of a panel discussion was simply amazing. In total, there were three panel members, and we each presented our thoughts on the topic, followed by a 30-minute Q&A with the audience.

Some of the points that I discussed as a panel member:

  • In my opinion, traditional machine learning can solve a lot of business problems, and deep learning is becoming more of a fad, something you are told you should have. A lot of the time, consultancy companies promote these solutions as part of the game.
  • In India, the retail industry is still not very digitized (except for e-commerce). We work on a lot of intelligent solutions like “suggested orders”, i.e. how much inventory a brand should deliver to a particular outlet. For this to happen, we need data about the outlet, its customer base, unsold inventory, order frequency, and so on. But where is the data? We can’t take a DeepAR or LSTM model, perform predictive analysis, and say this is the inventory. The data is simply not there.
  • Most of the time it takes domain understanding to keep things in perspective, which comes into the picture when we perform feature engineering. Deep learning might ultimately figure out the interesting features on its own, but that depends on the quality and quantity of the data, which are usually not there. For deep learning to work, it needs data, and in most cases, especially in brick-and-mortar retail, the data is not present. Almost 90% of Indian retail is offline.
  • It’s also about the general acceptance of the results. A model’s explainability is important when money is on the line, for example: whether or not to give a personal loan. That decision might need more than a “yes” or “no” output. We have seen in the retail industry that they want to use models for demand forecasting or smart merchandising, but they want to understand how we came up with the answer. They want to control the parameters. I don’t think DL will give that liberty.
  • Any new business that wants to try intelligent tools wants to first understand how it is done before getting fully on board. So, it is going to be ML first and then DL, with caveats in place.
  • However, there are places where DL shines and ML can’t do much. For example, in machine translation or image processing, there is no option but to accept DL.

One don’t: If the customer is just starting up, showing them DL and complex algorithms will scare them away. Start with simple ML algorithms that bring some improvement over the existing ways, and once the confidence is built, move on to more complex models.

One do: Use DL when you are dealing with a lot of different data sets and you have no idea what is good and what is bad. Basically, when you don’t have any domain expertise and still want to come up with a reasonable output.

Overall, the experience as a panel member was really great and I hope to be a part of such panel discussions in the future as well.

Other talks and presentations at the conference

This conference was like a gold mine of knowledge and wisdom. Some of the other talks and discussions were around:
  • Forecasting: Demand Forecasting and Stock Price Forecasting
  • Beyond regular OCR, how to extract the information from documents
  • Improvement of recommendations given by the Recommendation Engines
  • How NLP can be used to effectively extract entities and their relationships from documents
So, this was all that happened with me in Vegas this time around. It was an amazing experience to gather all the information and wisdom that the conference had to offer. Moreover, I connected with so many like-minded people who are now a part of my network.