Andrea Asperti

I am currently working on deep learning and deep reinforcement learning.

Some of my recent works have been focused on the following topics:
Playing Rogue with reinforcement learning techniques. Rogue is a famous dungeon-crawling video game of the '80s, the ancestor of its genre. Rogue-like games are known for requiring the exploration of partially observable, randomly generated labyrinths, preventing any form of level replay. As such, they serve as a very natural and challenging task for reinforcement learning, requiring the acquisition of complex, non-reactive behaviors involving memory and planning.

Automatic point-of-interest image cropping via ensembled convolutionalization. Convolutionalization of discriminative neural networks, introduced by J. Long et al. for segmentation purposes, is a simple technique for generating heat-maps of the location of a given object in a larger image. We apply this technique to automatically crop images at their actual point of interest. The use of an ensemble of fully convolutional nets significantly reduces the risk of overfitting, resulting in reasonably accurate crops.
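The core idea can be illustrated in miniature: a fixed-size scorer (standing in for a convolutionalized classifier) is slid over the full image to build a heat-map, and the crop is taken at the maximum response. This is only an illustrative sketch with NumPy, not the actual system; `heatmap_crop` and the toy mean-intensity scorer are hypothetical names introduced here, and a real ensemble would average the heat-maps of several nets before taking the argmax.

```python
import numpy as np

def heatmap_crop(image, patch_w, score_fn):
    """Slide a fixed-size scorer over the image (the essence of
    convolutionalization: the dense classifier head becomes a convolution),
    build a heat-map of scores, and crop at the maximum."""
    H, W = image.shape
    heat = np.full((H - patch_w + 1, W - patch_w + 1), -np.inf)
    for i in range(heat.shape[0]):
        for j in range(heat.shape[1]):
            heat[i, j] = score_fn(image[i:i + patch_w, j:j + patch_w])
    i, j = np.unravel_index(np.argmax(heat), heat.shape)
    return image[i:i + patch_w, j:j + patch_w], (i, j)

# Toy example: the "object" is a bright blob; the scorer is mean intensity.
img = np.zeros((8, 8))
img[5:7, 2:4] = 1.0
crop, (i, j) = heatmap_crop(img, 2, np.mean)
# The crop lands exactly on the bright region at (5, 2).
```

With a real network, the inner double loop is replaced by a single forward pass of the fully convolutional net, which is what makes the technique cheap in practice.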

Detection of Gastrointestinal Diseases from Endoscopical Images. The lack, due to privacy concerns, of large public databases of medical pathologies is a well-known and major problem, substantially hindering the application of deep learning techniques in this field. In this research, we investigate the possibility of compensating for the scarcity of data by means of data augmentation techniques.
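As a minimal sketch of what data augmentation means here (the function name and the specific choice of transforms are illustrative assumptions, not the paper's actual pipeline): each labelled image can be expanded into several label-preserving variants, multiplying the effective size of a small medical dataset.

```python
import numpy as np

def augment(image):
    """Generate label-preserving variants of one image:
    the 4 right-angle rotations plus their horizontal flips
    (the 8 symmetries of the square)."""
    variants = []
    for k in range(4):
        rot = np.rot90(image, k)
        variants.append(rot)
        variants.append(np.fliplr(rot))
    return variants

# One labelled sample becomes 8 distinct training examples.
img = np.arange(9).reshape(3, 3)
aug = augment(img)
```

Real pipelines typically add continuous transforms as well (random crops, small rotations, color jitter), but the principle is the same: enlarge the training distribution without collecting new patient data.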

Previous Research

Sometimes, I am a bit puzzled myself by the different topics I have been working on during my scientific career. So, to get some sense out of it, I decided to draw a picture.

In the end, I think there is a clear line of development that I was not entirely aware of.
I have always been interested in machine intelligence, but my early studies were on the logical side: lambda calculus, type theory, category theory.
From there, I got involved in linear logic, optimal reduction (the BOHM machine), and implicit computational complexity.
Then I moved to more concrete topics: mathematical knowledge representation and mechanization of formal reasoning (see my Interactive Prover Matita).
Neural networks have always intrigued me, but with the advent of deep learning I decided to devote some research effort to the field, and to its teaching.
This does not mean I am abjuring my old topics. Actually, I think that the integration between machine learning and deduction remains one of the big scientific challenges for the future: letting the machine learn to prove theorems or, equivalently, by the Curry-Howard analogy, to write its own programs.