We are building a unique product that makes machine learning accessible to every kind of user. AI technologies are being adopted everywhere, yet almost all such solutions are still built by scarce and expensive specialists. This approach does not scale: a broad AI transformation requires fully automating the work of data scientists.
We first conceived this idea several years ago and have since built our product, Neuton.ai: a cloud-based AutoML platform built on our own patented neural network framework. This short video demonstrates how easy it can now be to solve a customer churn forecasting challenge.
Our technical stack: C/C++, CUDA. Data processing: Python, numpy, pandas, sklearn, nltk, xgboost, catboost, lightgbm, tensorflow, torch. Platform: Java, Spring Boot, Spring Data, JPA/Hibernate, RabbitMQ, Vue.js (frontend).
Infrastructure: Git, Jenkins, Ansible, Terraform, GCP, GKE (Kubernetes), Docker, Keycloak, Grafana, OpenStack
We launched our product less than a year ago, and hundreds of companies in the US are already using it. The extremely high demand for AutoML products keeps driving us forward.
We are looking for talented people with deep knowledge of the data science field who share our ambition to make machine learning truly available to everyone: people far from math who do not code, small and medium businesses, pupils, students, startups, and nonprofit organizations. These are millions of people around the world, for whom we need to distill the best data science practices into three buttons. Our international team spans Europe, the USA, the CIS, and Canada.
Responsibilities:
- Work on automating data preprocessing and feature engineering methods
- Learn new approaches
- Conduct experiments and write a lot of code
Requirements:
- 5 years in an applied data science role
- Fluency with scripting (Python or R)
- Hands-on experience with pandas, numpy, sklearn, matplotlib, seaborn, xgboost, or the corresponding R packages
- Ability to independently perform complete machine learning projects including data cleaning/processing, EDA, hypothesis generation/testing, feature engineering, algorithm selection/tuning/validation, and generation of experiment reports
- Knowledge of evaluation metrics: their definitions, differences, and optimization techniques
- Understanding of logistic/linear regression, decision trees, random forests, boosted trees, and neural network frameworks
- Ability to explain the strengths and weaknesses of the above algorithms in practice
- Strong knowledge of Statistics
- Exceptional verbal communication, writing and interpersonal skills
- Strong project management skills
Good to have:
- Familiarity with BigData, Hadoop, CUDA
- Fluent in English
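The algorithm selection, tuning, and validation steps mentioned in the requirements can be sketched as follows. This is a generic illustration on synthetic data, not Neuton's method: the candidate models, hyperparameter grids, and scoring choice are assumptions for the example.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, cross_val_score

# Synthetic binary classification data standing in for a real dataset.
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

# Algorithm selection: compare a linear model and a tree ensemble
# via cross-validated ROC AUC.
candidates = {
    "logreg": LogisticRegression(max_iter=1000),
    "forest": RandomForestClassifier(n_estimators=100, random_state=0),
}
scores = {name: cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
          for name, model in candidates.items()}

# Tuning: grid-search hyperparameters of the best baseline.
best_name = max(scores, key=scores.get)
grids = {"logreg": {"C": [0.1, 1.0, 10.0]},
         "forest": {"max_depth": [3, 5, None]}}
search = GridSearchCV(candidates[best_name], grids[best_name],
                      cv=5, scoring="roc_auc")
search.fit(X, y)
print(best_name, round(search.best_score_, 3))
```

In a real project the same loop would sit behind held-out validation and an experiment report; the point is that the compare-tune-validate cycle is mechanical enough to automate.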
We offer:
- Innovative, state-of-the-art SaaS product development
- A stellar team of data science professionals
- Competitive compensation & benefits package
- Work with the newest algorithms and neural network frameworks
- Participation in top data science community events
- Remote work option