“Developers, developers, we need more developers,” comes the chorus. But what does it take to train an engineer, especially an engineer familiar with Machine Learning?
We onboard a lot. Some of our people are apprentices just out of school, some are masters graduates. Our team is made of computer scientists and mathematicians, but also plant biologists and robotics specialists. We work to welcome people to Artificial Intelligence – the field that, broadly, brings together algorithms, machines and people. It’s part of our mission to fix the AI talent pipeline problem.
But we are not just a training programme, yet another MOOC, or a university. We focus on practice over theory, doing over learning, projects over certificates. Our engineers and AI Apprenticeship programme participants work on real-world industry projects. Through doing so, they learn how to design data architectures, build pipelines and features, and optimise algorithms. They learn to make trade-offs and, perhaps most importantly, learn to own and take pride in their work.
This approach has strengths and weaknesses. Each project is different, which means each learning experience is different. Our varied backgrounds mean everyone has a different learning curve. So, we may not have best practices or answers. What we do have, however, are patterns and pain points we’ve noticed. After giving workshops everywhere from London to Singapore and Indonesia, these learning curves seem quite generalisable to most beginners. We hope this list gives a good heads-up for people with teams they need to grow.
Most learners are self-taught, which means working with a local computer or in short bursts on a cloud instance. With a production setup, the newbie developers have more choice. For example, is a P100 GPU really needed for a job, or will a slower, but cheaper, K80 be enough? In fact, will a GPU actually be needed at all, or will adding more CPUs solve the problem? Understanding how hardware choice affects training time, or how a neural network architecture affects choice of hardware, is something a machine learning practitioner-in-training will have to learn.
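As a rough illustration of that trade-off, the sums are simple once you frame them. The figures below (hourly rates, relative speedups, baseline job length) are purely hypothetical and not real cloud prices; the point is only that a faster, pricier GPU can finish sooner for a similar total cost:

```python
# Back-of-envelope cost comparison for a training job.
# All rates and speedups below are hypothetical, for illustration only.

def job_cost(hours_on_baseline: float, relative_speed: float, hourly_rate: float) -> float:
    """Total cost of a job that takes `hours_on_baseline` on a 1.0x-speed machine."""
    return (hours_on_baseline / relative_speed) * hourly_rate

# Hypothetical figures: treat the K80 as the 1.0x baseline,
# and assume a P100 runs this job roughly 3x faster at a higher rate.
baseline_hours = 12.0
k80_cost = job_cost(baseline_hours, relative_speed=1.0, hourly_rate=0.45)
p100_cost = job_cost(baseline_hours, relative_speed=3.0, hourly_rate=1.46)

print(f"K80:  ${k80_cost:.2f} over {baseline_hours / 1.0:.1f} h")
print(f"P100: ${p100_cost:.2f} over {baseline_hours / 3.0:.1f} h")
```

Under these made-up numbers the P100 job costs slightly more but finishes in a third of the time, which is why the answer is rarely "always the fastest GPU" or "always the cheapest one" — it depends on whether wall-clock time or budget is the binding constraint.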
Eventually, training on cloud compute instances can become too expensive. At this point, there are several options, all of which our novice has most probably yet to have heard of. They could submit a job to a supercomputing cluster, package their code in a container to run on a shared in-house NVIDIA DGX, or just ask for more cloud compute. This can open up a whole new world where AI meets High-Performance computing.
Optimising training is a learning hill that almost everyone has to climb. It’s just a question of when.
To read part 2, click here.
To find out more about our AI Apprenticeship Programme, visit: https://aisingapore.org/aiap/