NeuralCeption grew out of our weekly paper study group, when we realized our discussions might be useful to others too. So in 2020, we decided to write a series about Adversarial Examples, the phenomenon whereby neural network classifiers can be misled by adding imperceptible perturbations to images.
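
For readers new to the topic, here is a rough illustration of how such perturbations can be crafted. The sketch below uses the classic Fast Gradient Sign Method from Goodfellow et al.; it is one standard technique, not necessarily the exact setup our series covers, and the toy model, image shape, and epsilon are placeholder assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.03):
    """Fast Gradient Sign Method: nudge every pixel by +/- epsilon
    in the direction that increases the classification loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # For small epsilon the change is invisible to a human,
    # yet it can flip the model's prediction.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Toy usage: a random linear "classifier" and a random "image".
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
image = torch.rand(1, 3, 32, 32)
label = torch.tensor([0])
adversarial = fgsm_perturb(model, image, label)
print((adversarial - image).abs().max())  # bounded by epsilon
```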

That first series was followed by one about object detection, various posts on related topics, and projects that aim to provide step-by-step tutorials.

Who we are

We are two engineers, based in Canada and the Netherlands, who are fascinated by AI. We created this website to share our findings on the exciting topics of adversarial examples, object detection, and more.

David Kirchhoff
Software Engineer
I am a passionate learner, fascinated by the power of deep learning.

Philip Hoang
Data Scientist
Based in Toronto, I am a mechanical engineer turned data scientist who finds a lot of creative freedom in using deep learning to solve problems and create art. I enjoy hiking, traveling, playing the guitar, and listening to podcasts about economics while walking my corgi.

What motivates us

Studying adversarial examples raised our awareness of how differently humans and machines approach a vision task. Computer vision systems already outperform humans on specific tasks such as detecting pneumonia from X-rays. These tasks, however, are quite narrow, and the models need a lot of examples to learn the required representations.

Humans seem to rely more on common sense when approaching a vision task, and we think this would also be the best defense against adversarial examples. A human, for example, would infer from the context of a manipulated stop sign at an intersection that something is “wrong” with the sign and act accordingly.

We are very excited to see how the field of computer vision evolves from here.