Global visionaries on the rise: Margriet Groenendijk on AI to help the world

Over the last few years, a new breed of tech enthusiasts has been paving a new approach to everyday life. Technological tools are freeing our minds and expanding our capabilities, creating solutions to critical problems that previously appeared unsolvable.

We spoke with Margriet Groenendijk, IBM developer advocate, about her interest in climate change, the need for unbiased AI, and the plethora of IBM tools and solutions that can enhance agriculture-related technology, or AgTech, in developing countries.

Margriet’s Algebra

Margriet Groenendijk is Dutch and holds a Ph.D. in environmental sciences. She previously worked as a climate researcher in the UK and is now part of IBM's data and AI team in London. Her first love was environmental science and building models to understand the world. You never forget your first love, but you meet other loves along the way, and currently Margriet seems to be in love with artificial intelligence.

We live in a time when coding can appear to be the most important skill. The foundation, then, is knowledge of one or more programming languages, which act as building blocks to erect solid foundations and robust walls to advance society. Margriet contends, “I don’t think programming is the only foundation”.

“In artificial intelligence and data science especially, what you need is a basic understanding of mathematics. Calculus, linear algebra, and statistics cannot be replaced by programming techniques alone. You have to understand data to be able to build models with dedicated tools such as Python, R, Scala, or any other suitable language, if you want to have a place in the world to come, starting from today”.

AI using online tools

AI is a way to infuse intelligence through the appropriate tools and processes, as IBM’s Hillery Hunter briefly but clearly explains in her presentation on the latest tools and developments in the cloud. Once you understand data science and the related modeling approach, then you can find your way through programming skills. “I use Python, a language well suited for AI algorithms”. Machine learning and deep learning algorithms need a lot of computing resources, while maintenance and version control can be complex to achieve. This kind of management is not easy for all data scientists or software developers.

Most of the work needs a team approach with shared platforms that are perfectly aligned, a result that cannot be achieved if each developer writes code independently on their own hardware, with their own updates. That’s why cloud-based platforms are becoming more useful and more mainstream. “I use the IBM Cloud and Watson Studio daily to analyze data and build models in Jupyter notebooks, as it is very flexible and has several easy-to-use tools to get better results with less programming effort”, shares Margriet.

IBM Watson Studio provides you with data assets and analytical assets, forming an environment to collaboratively solve business problems. You can choose the tools you need to analyze and visualize data, to cleanse and shape data, to ingest streaming data, or to create and train machine learning models.

AI fairness, explainability, and adversarial robustness

IBM strongly believes in integrating AI into everyday solutions to improve results across the whole chain. Talking about Watson Studio brings us to the 360 effort, a complete strategy around the main issues that make AI a suitable technology for business. The three pillars of the IBM AI building are fairness, explainability, and adversarial robustness. These are developed as open-source projects: AI Fairness 360 (AIF360), AI Explainability 360 (AIX360), and the Adversarial Robustness Toolbox (ART), all of which you can use in your own projects. IBM provides sample information and resources.

Join Call for Code to make a difference

A great list of freely-available data sets about the environment, energy, weather, and more is a ready resource worth a visit if you are curious about what you can do starting with good datasets and AI-based technologies. To contribute to a better world, your place is the 2020 Call for Code Global Challenge.

AI can be cheated

One of the problems with the image databases used as training data sets is that it is easy to change the meaning of what the algorithms see. A well-known example is where the self-driving algorithm of an automated car is fooled by placing a small piece of black tape next to the numbers on a speed limit sign.

This simple adversarial action confuses the algorithm and the corresponding speed constraint, setting it to a different value or even removing all speed limits. It is also easy to add noise to a normal voice in order to fool voice assistants: you ask for something, but if somebody adds some engineered noise, the assistant can be tricked into understanding a totally different request or question.

The adversarial manipulation of images is something most of us are not even aware of, but it is one of Margriet’s current scientific interests. There are several types of attacks. For example, when the images used to train a machine learning model are changed, we are talking about a “poisoning” attack. With an “evasion” attack, on the other hand, the images shown to an already-trained model are slightly manipulated, as in the example of road signs above. The attack can target any element of the image, including its name or its metadata. IBM is working on this problem with the Adversarial Robustness Toolbox (ART).
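The evasion idea can be sketched in a few lines of plain Python. The linear model, input, and perturbation budget below are invented purely for illustration; real attacks and defenses are implemented in toolkits such as ART:

```python
import math

# A toy "trained" linear classifier: predicts class 1 when w.x + b > 0.
# Weights, input, and epsilon are invented for illustration only.
w = [2.0, -1.0, 0.5]
b = 0.1

def score(x):
    # sigmoid of the linear score: > 0.5 means class 1
    return 1.0 / (1.0 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))

def sign(v):
    return (v > 0) - (v < 0)

x = [0.9, 0.2, 0.4]          # a clean input, classified as class 1
clean_score = score(x)

# Evasion (FGSM-style): move each feature a small step against the gradient.
# For a linear model the gradient w.r.t. x is proportional to w, so the
# attack direction is simply -sign(w).
epsilon = 0.6                # perturbation budget
x_adv = [xi - epsilon * sign(wi) for xi, wi in zip(x, w)]
adv_score = score(x_adv)

print(clean_score > 0.5)     # True: clean input is class 1
print(adv_score > 0.5)       # False: the small change flips the decision
```

The same mechanism scales up to images: each pixel is nudged by an amount too small for a human to notice, yet the accumulated effect pushes the model across its decision boundary.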

AI must be unbiased

Evaluating machine learning models for bias is a key topic in today’s research. A short video tutorial on how to use IBM Watson OpenScale to automate and operationalize AI is available online.

The analysis of bias in AI algorithms is probably the most important topic in Margriet’s work. There are many examples, the most striking possibly being the justice algorithm that discriminates against a black man but not a white man, or against a woman compared to a man, under the same baseline conditions.

The algorithm was biased because it did not demonstrate “a balance for the false positives.” The way fairness was defined kept many African-Americans from going on parole and home to their families: we should not use historical data to train AI-based risk assessment tools, because that data was collected with many prejudices, and this led to serious mistakes.
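The “balance for the false positives” criterion can be made concrete with a few lines of Python. The records below are invented toy data, not real case records:

```python
# Each record: (group, predicted_high_risk, actually_reoffended).
# All values are invented for illustration.
records = [
    ("A", True,  False), ("A", True,  False), ("A", True,  True), ("A", False, False),
    ("B", True,  True),  ("B", False, False), ("B", False, False), ("B", False, False),
]

def false_positive_rate(group):
    # among people in this group who did NOT reoffend, how many were
    # nevertheless predicted to be high risk?
    negatives = [r for r in records if r[0] == group and not r[2]]
    false_pos = [r for r in negatives if r[1]]
    return len(false_pos) / len(negatives)

print(round(false_positive_rate("A"), 2))  # 0.67: two of three non-reoffenders flagged
print(round(false_positive_rate("B"), 2))  # 0.0: no non-reoffenders flagged
```

A model like this one harms group A far more than group B, even if its overall accuracy looks acceptable; that gap between the two rates is exactly the imbalance the criterion measures.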

Some freethinkers recently raised the problem of letting mostly men, not women, program most AIs, thus instilling in them the competitive bias typical of men. Men are overrepresented in AI, which leads to biases in how it is researched. “It’s not a problem of skin color or sex, but biases exist and have to be solved if we want to progress”, says Margriet. The only choice is to eliminate all kinds of biases, making the most of our algorithms.

Decision making in many areas can be flawed, shaped by our individual and social biases, which are often unconscious. We thought that using machine learning to automate decisions would result in fairness for all, but we now know that’s not the case.

As a software developer, you often hear of algorithmic bias. But the main source of bias is most often found not in the algorithm but in the underlying data. Models may be trained on data containing biased human decisions, or on data that reflects the effects of societal or historical inequities.

You can look for bias in the training data when developing your code, and try to remove it by pre-processing the data before training the model. During model training, you can add further constraints: for instance, in financial decision-making, older and younger people should have the same acceptance rate for a loan, while today they have different rates due to age bias. Lastly, bias can be corrected after the model is trained. The AIF360 toolkit explains how to implement all these methods.
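The pre-processing step can be illustrated with the arithmetic behind “reweighing”, one of the techniques AIF360 ships. This is a sketch of the underlying idea, not the AIF360 API, and the loan data is invented: each training example gets a weight so that the protected attribute and the label become statistically independent in the weighted data.

```python
from collections import Counter

# Invented training data: (age_group, loan_approved).
# Older applicants are approved far more often than younger ones.
data = ([("older", 1)] * 6 + [("older", 0)] * 2 +
        [("younger", 1)] * 2 + [("younger", 0)] * 6)

n = len(data)
group_counts = Counter(g for g, _ in data)
label_counts = Counter(y for _, y in data)
pair_counts = Counter(data)

def weight(group, label):
    # expected frequency if group and label were independent,
    # divided by the observed frequency of this (group, label) pair
    expected = (group_counts[group] / n) * (label_counts[label] / n)
    observed = pair_counts[(group, label)] / n
    return expected / observed

print(round(weight("younger", 1), 3))  # 2.0: up-weight the under-approved group
print(round(weight("older", 1), 3))    # 0.667: down-weight the over-approved group
```

Training on the reweighted examples gives both age groups the same effective approval rate in the data the model sees, which is the pre-processing route to the equal-acceptance-rate constraint described above.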

Chatbots are great answers

A bias-free environment will be the best way to unlock the full potential of artificial intelligence algorithms. Self-driving cars and social-management applications are frequently seen in the news and other media, but many direct applications are riding the global wave of software development.

“The chatbot is still the most intriguing solution to me”, smiles Margriet; “It’s incredible how many things can be done with this application of AI techniques”. Textual chatbots are frequently used as a replacement for, or addition to, banking and insurance websites. There are plenty of voice-based chatbots, as all the voice assistants already on the market suggest. They appear mostly as standalone devices but will soon be integrated into more complex devices; luxury semi-autonomous car manufacturers are working in this area. Building a chatbot is straightforward with Watson Assistant, as proven by the chatbot created as part of the Call for Code hackathon to help with crisis communication.
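At its core, a text chatbot maps a user message to the closest matching intent and returns that intent’s reply. The toy sketch below shows that intent-matching idea in plain Python; it is not the Watson Assistant API, and the intents, keyword sets, and replies are all invented:

```python
# Invented intents for a crisis-communication chatbot.
intents = {
    "shelter": {"keywords": {"shelter", "evacuate", "housing"},
                "reply": "The nearest open shelter is listed on the city website."},
    "water":   {"keywords": {"water", "drink", "boil"},
                "reply": "Boil tap water for at least one minute before drinking."},
}
FALLBACK = "Sorry, I did not understand. Can you rephrase?"

def answer(message):
    # tokenize, lowercase, and strip trailing punctuation
    words = {w.strip("?!.,") for w in message.lower().split()}
    # pick the intent whose keyword set overlaps the message the most
    best = max(intents.values(), key=lambda i: len(i["keywords"] & words))
    return best["reply"] if best["keywords"] & words else FALLBACK

print(answer("Where can I find a shelter?"))   # shelter reply
print(answer("Is the water safe to drink?"))   # water reply
print(answer("Hello there"))                   # fallback
```

Services like Watson Assistant replace the keyword overlap with a trained classifier and add dialog state, but the request-to-intent-to-reply flow is the same.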

The 2020 Call for Code Global Challenge is where your dreams come true

Talking to Margriet, a question arises about what participants get out of the Call for Code 2020 challenge. Anyone can enter. There is a kind of excitement that comes from participating in these events: you and your team start from scratch to design solutions with best-of-breed components, using them in both usual and unusual ways, inventing new techniques and materials, and addressing the complex environmental, medical, and social challenges arising from climate change or COVID-19. You can truly help the world!

Margriet’s dream is feeding the world

Margriet’s work today is in large part dedicated to inspiring people to solve problems. As a data scientist, she still spends a lot of time improving her skills. “I will never forget my deep interest in climate change. For the Call for Code challenge I am building a simple flood model to inspire attendees”. Floods impact water supplies, and water affects crops. “I’m also working on a simple crop model to, for example, advise when to seed and harvest crops, and I am also providing starter kits for communities in need”.

That’s where calculus, globality, and AI bring inspired people. Paraphrasing a well-known book title from science-fiction author Philip K. Dick, and updating his question to today’s world of data and coding: do AI-based androids dream of AI-driven sheep?



