Most influential scientific books in my professional career

Andrea Asperti

I have started a list of reviews of some of the scientific books I read during my life. They are not necessarily the best books I have ever encountered, but those which have been most influential for my professional career. I have five in mind. For the moment, I have only had time to write up a few of them.

Lineamenti di Logica Matematica. Ettore Casari, Feltrinelli, 1959.

I do not know why this book was in my parents' library. I do not even know whether it had been bought by my mother, who was a mathematician but, loving calculus, probably despised logic, or by my father, who was a physician and, like many physicians, loved philosophy. In any case, I believe neither of them ever went through it.

I discovered and read it when I was sixteen: so you may understand why I believe it was influential.

I recall I had a terrible time deciphering it: it was a thousand miles away from any other mathematical text I had previously encountered. My memory was of a terrifically obscure and esoteric text. I went through it again, for the purpose of writing these notes, and I now understand that my impression was mostly due to my scientific immaturity and my difficulty in distinguishing and appreciating the relevant facts behind the heavy logical formalism. Of course, Casari's style, dry and a bit pedantic, did not exactly help: the first chapter, on propositional logic, starts by discussing the signs of the language, encoding the infinite succession of propositional variables as p*, p**, p***, and so on. The right thing to do, you will say: you need to start from the ground up; but at the age of sixteen the necessity of formally defining the signs of the language was really puzzling. A posteriori, I must admit that all the basic notions are covered in the book: Hilbert-Bernays systems and natural deduction, normal forms, duality, deduction theorems, semantics, validity and completeness. It also discusses decidability issues, both for propositional and first-order logic, and provides some hints at higher-order logic and type theory. A good book, though not one meant for an undergraduate.

When I was young, I made a point of finishing any book I started, and I swear I read the whole of it. I would also say I was able to follow most of the reasoning, even if, at the time, I did not fully understand what the actual point of most of those results was, and Casari carefully refrains from explaining it. I think that what induced me to keep reading was the strange feeling of becoming part of a mysterious and secret society, one that could at some point deliver magical artifacts to its initiates, revealing hidden mysteries of the universe.

Alas, I made the same mistake several years later, with Category Theory.

Symbolic Logic and Mechanical Theorem Proving. Chin-Liang Chang and Richard Char-Tung Lee, Academic Press, 1973.

I discovered this book by chance, crawling through the dungeons of the library of the Scuola Normale Superiore di Pisa. After Casari, and several other more or less indigestible introductions to logic, this book was like a breath of fresh air: like reading Salinger right after having completed Proust's "Recherche" (just kidding, I love Proust, but you get the idea).
You must understand that all the people of my generation undertook studies in Computer Science with Asimov's laws of robotics in mind: all we were interested in was understanding how we could put intelligence inside a machine (and at that time I had little doubt that intelligence meant logic). The book explains precisely that, discussing unification (a revelation!) and presenting many resolution-based techniques, fairly discussing their advantages and drawbacks. Proofs of the soundness and completeness of the different approaches are also provided, introducing techniques that were completely new to me, and very interesting. Algorithms are presented in a mathematically rigorous way (that was a novelty too, with respect to most computer science books of the time), and yet in a way that was readily amenable to implementation on a machine. With first-order unification, it was love at first sight. The possibility of creating structure as a synthesis of different, related notions had a magical fascination: it looked to me like a fundamental gnoseological principle.
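To make the fascination concrete, here is a minimal sketch of Robinson-style first-order unification, the algorithm at the heart of resolution. The term representation is my own choice for illustration (strings starting with '?' as variables, tuples as compound terms), not the book's notation.

```python
def is_var(t):
    """Variables are strings starting with '?'; anything else is a constant or functor."""
    return isinstance(t, str) and t.startswith('?')

def walk(t, subst):
    """Follow variable bindings until reaching a non-variable or an unbound variable."""
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def occurs(v, t, subst):
    """Occurs check: does variable v appear inside term t under subst?"""
    t = walk(t, subst)
    if t == v:
        return True
    if isinstance(t, tuple):
        return any(occurs(v, arg, subst) for arg in t)
    return False

def unify(x, y, subst={}):
    """Return a most general unifier extending subst, or None on failure."""
    if subst is None:
        return None
    x, y = walk(x, subst), walk(y, subst)
    if x == y:
        return subst
    if is_var(x):
        return None if occurs(x, y, subst) else {**subst, x: y}
    if is_var(y):
        return unify(y, x, subst)
    if isinstance(x, tuple) and isinstance(y, tuple) and len(x) == len(y):
        for a, b in zip(x, y):
            subst = unify(a, b, subst)
        return subst
    return None

# f(X, b) unifies with f(a, Y) under the substitution {X -> a, Y -> b}
print(unify(('f', '?X', 'b'), ('f', 'a', '?Y')))  # {'?X': 'a', '?Y': 'b'}
```

The occurs check is what keeps the result a finite term: without it, unifying X with g(X) would silently produce a circular binding.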

The book is very old and, in spite of the new material added in recent editions, I presume it is by now a bit obsolete. Still, if you are interested in the subject and plan to build your own theorem prover, it could be worth having a look at it. From the point of view of my scientific interests, it largely filled a gap I had tried in vain to fill with many different readings. At the same time, it also made apparent to me the practical complexity of the problem, and the intrinsic infeasibility of addressing deductive reasoning by brute-force computation.

If we had to put intelligence inside a machine, it would probably be by different techniques.

Theory of Recursive Functions and Effective Computability. Hartley Rogers, MIT Press (original edition by McGraw-Hill), 1967.

At the beginning of my third year of studies in Computer Science I was still struggling to understand what the point of this discipline was. So far, most of the courses had been a long list of taxonomies, definitions and protocols: you cannot build a science out of definitions. Even the course on Artificial Intelligence, which we had been so eager to attend, ended up as a long list of pointless mumblings. Any time I saw a subject with a minimum of scientific content I had the illusion that it could be, at last, the actual core of computer science: maybe numerical analysis? maybe operational research?

So, when I first met computability theory, it was a real bolt from the blue: it was a metatheoretical, mathematical distillate of what I really loved in Computer Science, which was neither representation nor communication, but computation. I had to go to the reference book of the discipline, which was not hard to find: Hartley Rogers' "Theory of Recursive Functions and Effective Computability", the bible. I loved everything about that book: the style, the approach, the philosophy. Computability is one of the very few mathematical notions with an absolute content, providing a precise boundary to the limited capabilities of humans and machines, poor finitistic agents: some functions are computable, others are not. Starting from this remarkable fact, a huge universe of notions, methods and results can be derived, fixing precise limitations on the kinds of problems that can be addressed by algorithmic means. The book rapidly passes through Turing machines and (primitive) recursive functions, quickly moving to more advanced and interesting topics like reducibility, creative sets, Post's problem, cylinders and hypersimple sets. The final part of the book covers degrees of unsolvability, the arithmetical hierarchy and the analytic hierarchy.
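The claim that some functions are simply not computable rests on Turing's diagonal argument, which can be rendered as a short code sketch: given any candidate halting decider, one constructs a program that does the opposite of whatever the decider predicts about it. The names here (`make_diagonal`, `halts`) are purely illustrative, not from Rogers' book.

```python
def make_diagonal(halts):
    """Given any candidate decider halts(f) -> bool, build a program it misjudges."""
    def diagonal():
        if halts(diagonal):   # the decider predicts we halt...
            while True:       # ...so we loop forever instead
                pass
        # the decider predicts we loop, so we halt immediately
    return diagonal

# Whatever a candidate decider answers about its own diagonal program is wrong.
always_no = lambda f: False        # a (hopeless) decider claiming nothing halts
d = make_diagonal(always_no)
d()   # halts immediately, contradicting always_no's verdict; the same trap
      # catches every candidate decider, so no correct one can exist
```

The same diagonal construction, iterated through reductions, is what powers the hierarchies described above.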

Of course, most of Computability Theory is little more than an apocryphal systematization of known results from the logical studies of the thirties; its golden age was in the fifties and sixties, and not much research is still going on, apart maybe from some interesting investigations into randomness. Possibly for this reason too, Rogers' book still maintains, even fifty years after its publication, its freshness, vigour and clarity: it remains the reference book of the discipline.