John V. Tucker

John Vivian Tucker (born 1952) is a British computer scientist and expert on computability theory, also known as recursion theory. Computability theory is about what can and cannot be computed by people and machines. His work has focused on generalising the classical theory to deal with all forms of discrete/digital and continuous/analogue data, on using these generalisations as formal methods for system design, and on the interface between algorithms and physical equipment.

Biography

Born in Cardiff, Wales, he was educated at Bridgend Boys' Grammar School, where he was taught mathematics, logic and computing. He read mathematics at the University of Warwick (BA in 1973) and studied mathematical logic and the foundations of computing at the University of Bristol (MSc in 1974, PhD in 1977). He held posts at Oslo University, the CWI in Amsterdam, and at the Universities of Bristol and Leeds, before returning to Wales as Professor of Computer Science at Swansea University in 1989. In addition to theoretical computer science, Tucker also lectures on the history of computing and on the history of science and technology in Wales.

Tucker founded the British Colloquium for Theoretical Computer Science in 1985 and served as its president from its inception until 1992. He is a Fellow of the British Computer Society and an editor of several international scientific journals and monograph series. At Swansea, he was Head of Computer Science (1994–2008) and Head of Physical Sciences (2007–11). He is a Member of Academia Europaea. Outside computer science, Tucker is a trustee of the Welsh think tank the Institute of Welsh Affairs and chair of its Swansea Bay branch. He is also a trustee of the South Wales Institute of Engineers Educational Trust.

Professor Tucker is married to Dr. T.E. Rihll, Reader in Ancient History at Swansea University.

Professor Tucker is a Founding Fellow of the Learned Society of Wales and in July 2010 he was appointed as its inaugural general secretary.

Work on computability and data types

Classical computability theory is based on the data types of strings or natural numbers. In general, data types, both discrete and continuous, are modelled by universal algebras, which are sets of data equipped with operations and tests. Tucker's theoretical work tackles the problems of: how to define or specify properties of the operations and tests of data types; how to program and reason with them; and how to implement them.
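To make the idea concrete, here is a minimal Python sketch of a data type presented as an algebra: a carrier of data together with named operations and Boolean-valued tests. The representation and names are illustrative only and are not drawn from Tucker's papers.

```python
from dataclasses import dataclass
from typing import Callable, Dict

# A (single-sorted) algebra: a carrier of data together with named
# operations and Boolean-valued tests on that carrier.
@dataclass
class Algebra:
    carrier: str                     # informal description of the data
    operations: Dict[str, Callable]  # e.g. constants, successor, addition
    tests: Dict[str, Callable]       # Boolean-valued operations

# The natural numbers as an algebra: zero, successor, addition and an
# equality test.  Continuous data types (e.g. the reals) are modelled in
# the same way, with different carriers, operations and tests.
naturals = Algebra(
    carrier="natural numbers",
    operations={
        "zero": lambda: 0,
        "succ": lambda n: n + 1,
        "add": lambda m, n: m + n,
    },
    tests={
        "eq": lambda m, n: m == n,
    },
)

print(naturals.operations["add"](naturals.operations["succ"](1), 3))  # 5
print(naturals.tests["eq"](2, 2))                                     # True
```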

In a series of theorems and examples, starting in 1979, Jan Bergstra and Tucker established the expressive power of different types of equations and other algebraic formulae on any discrete data type. For example, they showed that:

On any discrete data type, functions are definable as the unique solutions of small finite systems of equations if, and only if, they are computable by algorithms.

The results combined techniques of universal algebra and recursion theory, including term rewriting and Matiyasevich's theorem.
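As an illustration of the kind of equational definition involved (a standard textbook example, not one taken from the cited papers), addition and multiplication on the natural numbers are the unique functions satisfying the following small finite system of equations over zero and the successor function S:

```latex
\begin{align*}
  \mathrm{add}(x,0)    &= x                      & \mathrm{mult}(x,0)    &= 0\\
  \mathrm{add}(x,S(y)) &= S(\mathrm{add}(x,y))   & \mathrm{mult}(x,S(y)) &= \mathrm{add}(\mathrm{mult}(x,y),\,x)
\end{align*}
```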

For the other problems, he and his co-workers have developed two independent generalisations of classical computability/recursion theory, which turn out to be equivalent for many continuous data types.

The first generalisation, created with Jeffrey Zucker, focuses on imperative programming with abstract data types and covers specifications and verification using Hoare logic. For example, they showed that:

All computable functions on the real numbers are the unique solutions to a single finite system of algebraic formulae.
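The style of programming studied in this generalisation can be suggested by a small Python sketch. It is illustrative only, and far simpler than the 'while' programs over many-sorted algebras treated in the cited work: an imperative program that manipulates data solely through the operations and tests of an abstract data type, so the same program text runs over any algebra supplying the right signature.

```python
# Illustrative sketch: an imperative ('while') program over an abstract
# data type.  The program touches the data only through the operations
# and tests passed in, never through a concrete representation.

def times(x, y, zero, pred, add, is_zero):
    """Compute x * y using only the algebra's operations and tests."""
    acc = zero
    while not is_zero(y):
        acc = add(acc, x)
        y = pred(y)
    return acc

# Instantiating the signature with the standard natural numbers.
print(times(3, 4,
            zero=0,
            pred=lambda n: n - 1,
            add=lambda m, n: m + n,
            is_zero=lambda n: n == 0))   # prints 12
```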

The second generalisation, created with Viggo Stoltenberg-Hansen, focuses on implementing data types using approximations contained in the ordered structures of domain theory.
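A minimal Python sketch of this idea of approximation, assuming a simple interval representation rather than the domain constructions of the cited handbook chapters: an exact real such as the square root of 2 is implemented by a chain of nested rational intervals, each carrying finitely much information, ordered by reverse inclusion.

```python
from fractions import Fraction

def sqrt2_approximations(n):
    """Yield n nested rational intervals [lo, hi], each containing sqrt(2).

    Each interval is a finite approximation; later intervals refine
    earlier ones (reverse inclusion), and sqrt(2) is their limit.
    """
    lo, hi = Fraction(1), Fraction(2)
    for _ in range(n):
        yield lo, hi
        mid = (lo + hi) / 2
        if mid * mid <= 2:
            lo = mid          # keep the half that still contains sqrt(2)
        else:
            hi = mid

for lo, hi in sqrt2_approximations(5):
    print(float(lo), "<= sqrt(2) <=", float(hi))
```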

The general theories have been applied as formal methods in microprocessor verification, the design of data types, and tools for volume graphics and for modelling excitable media, including the heart.

Work on computability and physics

Since 2003, Tucker has worked with Edwin Beggs and Felix Costa on a general theory analysing the interface between algorithms and physical equipment. The theory answers various questions concerning:

  1. how algorithms can be boosted by special-purpose physical devices acting as "oracles";
  2. how algorithms control physical experiments that are designed to make measurements.

By transforming the idea of an oracle in computability theory, they combine algorithmic models with precisely specified models of physical processes. For example, they ask the question:

If a physical experiment were to be completely controlled by an algorithm, what effect would the algorithm have on the physical measurements made possible by the experiment?

Their central idea is that, just as Turing modelled the human computer in 1936 by a Turing machine, a technician performing an experimental procedure that governs an experiment can be modelled by a Turing machine. They show that the mathematics of computation imposes fundamental limits on what can be measured in classical physics:

There is a simple Newtonian experiment to measure mass, based upon colliding particles, for which there are uncountably many masses m such that, for every experimental procedure governing the equipment, it is only possible to determine finitely many digits of m, even allowing arbitrarily long run times for the procedure. In particular, there are uncountably many masses that cannot be measured.
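A toy Python sketch may help convey the "experiment as oracle" idea. The physics, the timing model and all names below are simplified inventions for illustration, not the model analysed in the cited papers: the algorithm can interrogate the unknown mass only by scheduling experimental runs, and each run is charged a time cost that grows as the comparison becomes finer, so within any fixed time budget only finitely many digits are ever determined.

```python
import random

# Toy sketch of the "experiment as oracle" idea.  The algorithm (playing
# the role of the experimenter) can only learn about the unknown mass by
# scheduling experimental runs that compare it with a chosen test mass;
# each run is charged a time cost that grows as the comparison gets finer.
# The physics and the timing model are deliberately simplified.

UNKNOWN_MASS = random.random()           # hidden from the algorithm

def collision_experiment(test_mass):
    """One experimental run: is the unknown mass below test_mass?"""
    return UNKNOWN_MASS < test_mass

def measure(time_budget):
    """Bisection procedure governing the experiment, within a time budget."""
    lo, hi, spent = 0.0, 1.0, 0
    while spent < time_budget:
        # Finer comparisons correspond to longer experimental runs.
        spent += int(1 / (hi - lo))
        mid = (lo + hi) / 2
        if collision_experiment(mid):
            hi = mid
        else:
            lo = mid
    return lo, hi                         # the digits determined so far

print(measure(time_budget=1000))
```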

References

  1. J A Bergstra and J V Tucker, Equational specifications, complete term rewriting systems, and computable and semicomputable algebras, Journal of the ACM, Volume 42 (1995), pp. 1194–1230.
  2. V Stoltenberg-Hansen and J V Tucker, Effective algebras, in S Abramsky, D Gabbay and T Maibaum (eds.), Handbook of Logic in Computer Science, Volume IV: Semantic Modelling, Oxford University Press (1995), pp. 357–526.
  3. V Stoltenberg-Hansen and J V Tucker, Computable rings and fields, in E Griffor (ed.), Handbook of Computability Theory, Elsevier (1999), pp. 363–447.
  4. J V Tucker and J I Zucker, Computable functions and semicomputable sets on many sorted algebras, in S Abramsky, D Gabbay and T Maibaum (eds.), Handbook of Logic in Computer Science, Volume V: Logic and Algebraic Methods, Oxford University Press (2000), pp. 317–523.
  5. J V Tucker and J I Zucker, Abstract computability and algebraic specification, ACM Transactions on Computational Logic, Volume 5 (2004), pp. 611–668.
  6. J A Bergstra, Y Hirschfeld and J V Tucker, Meadows and the equational specification of division, Theoretical Computer Science, Volume 410 (2009), pp. 1261–1271. doi:10.1016/j.tcs.2008.12.015
  7. E J Beggs, J F Costa, B Loff and J V Tucker, Computational complexity with experiments as oracles, Proceedings of the Royal Society Series A, Volume 464 (2008), pp. 2777–2801.
  8. E J Beggs, J F Costa, B Loff and J V Tucker, Computational complexity with experiments as oracles II: Upper bounds, Proceedings of the Royal Society Series A, Volume 465 (2009), pp. 1453–1465.
  9. E J Beggs, J F Costa and J V Tucker, Limits to measurement in experiments governed by algorithms, Mathematical Structures in Computer Science, Volume 20 (2010), pp. 1019–1050.