Wednesday, October 31, 2012

Chomsky: Purpose of Education

PURPOSE OF EDUCATION

Well, we could ask ourselves what the purpose of an educational system is and there are sharp differences on this matter. Now, there's the traditional interpretation that comes from the Enlightenment which holds that the highest goal in life is to inquire and create, to search the riches of the past, and try to internalize the parts of them that are significant to you and carry that quest for understanding further in your own way.

The purpose of education from that point of view is just to help people determine how to learn on their own. It's you, the learner, who is going to achieve in the course of education. It's really up to you what you'll master, where you'll go, how you'll use it. How you'll go on to produce something new and exciting for yourself, maybe for others. That's one concept of education.

(2:00) Now the other concept is essentially indoctrination. People have the idea that from childhood, young people have to be placed into a framework in which they'll follow orders, accept existing frameworks, and not challenge, and so on. And this is often quite explicit. For example, after the activism of the 1960s, there was great concern across much of the educated spectrum that young people were just getting too free and independent, that the country was becoming too democratic and so on. In fact, there is an important study on what's called the crisis of democracy--too much democracy--claiming that there are certain institutions responsible for the indoctrination of the young--that's their phrase--and they're not doing their job properly. That's schools, universities, churches--we have to change them so that they carry out the job of indoctrination and control more effectively.

That's actually coming from the liberal internationalists' end of the spectrum of educated opinion. In fact, since that time there have been many measures taken to try to turn the educational system towards more control, more indoctrination, more vocational training. Imposing a debt which traps students--young people--into a life of conformity and so on.

That's the exact opposite of what I referred to as the tradition that comes out of the Enlightenment. There's a constant struggle between those. In the colleges and the schools, do you train for passing tests? Or do you train for creative inquiry--pursuing interests that are aroused by the material that's presented, interests that you want to pursue either on your own or in cooperation with others?

And this goes all the way through up to graduate school and research. Just two different ways of looking at the world. When you get to, say, a research institution like the one we're now in, at the graduate level, it essentially follows the Enlightenment tradition. In fact, science couldn't progress unless it was based on inculcation of the urge to challenge, to question doctrine, question authority, search for alternatives, use your imagination freely under your own impulses.

Cooperative work with others is constant, as you can see just by walking down the halls. That's my view of what an educational system should be like, down to kindergarten. But there certainly are powerful structures in society which would prefer people to be indoctrinated, to conform, to not ask too many questions, to be obedient, to fulfill the roles that are assigned to them, and not try to shake systems of power and authority.

Those are choices we have to make, either as people, wherever we stand in the educational system. As students, as teachers, as people on the outside trying to help shape it in the direction that we think it ought to go.

IMPACT OF TECHNOLOGY

Well there certainly has been a very substantial growth in new technology--technology of communication, information, access interchange. It's surely a major change in the nature of the culture and society. We should bear in mind that the technological changes that are taking place now, while they're significant, probably come nowhere near having as much impact as technological advances of, say, a century ago plus or minus.

Let's take just communication. The shift from a typewriter to a computer, or from a telephone to email, is significant. But it doesn't begin to compare with the shift from a sailing vessel to the telegraph. The amount of time that cut from communication between England and the United States was extraordinary compared with the changes taking place now. The same is true of other kinds of technology. The introduction of widespread plumbing in the cities had a huge effect on health, much more than the discovery of antibiotics. So the changes are real and significant, but we should recognize that others have taken place which in many ways were more dramatic.

As far as the technology itself and education is concerned, technology is basically neutral. It's kind of like a hammer. The hammer doesn't care whether you use it to build a house or whether a torturer uses it to crush somebody's skull. A hammer can do either. Same with modern technology--say, the internet, and so on. The internet is extremely valuable if you know what you're looking for. I use it all the time for research; I'm sure everyone does. If you know, more or less, what you're looking for--if you have a kind of framework of understanding which directs you to particular things and lets you sideline lots of others--then it can be a very valuable tool.

Of course, you always have to be willing to ask "Is my framework the right one?" "Maybe I have to modify it." "Maybe if there's something I look at that questions it, I should rethink how I'm looking at things." But you can't pursue any kind of inquiry without a relatively clear framework that's directing your search and helping you choose what's significant and what isn't--what can be put aside, what ought to be pursued, what ought to be challenged, what ought to be developed, and so on.

You can't expect somebody to become a biologist by giving them access to the Harvard University biology library and saying, "Just look through it." That'll give them nothing. The internet is the same, except magnified enormously. If you don't understand or know what you're looking for, if you don't have some kind of conception of what matters--always, of course, with the proviso that you're willing to question it and see if it's going in the wrong direction--then exploring the internet is just picking out random factoids that don't mean anything. So unless behind any significant use of contemporary technology--the internet, communications systems, graphics, whatever it may be--there is a well-constructed, directive conceptual apparatus, it is very unlikely to be helpful.

It may turn out to be harmful. For example, random exploration through the internet turns out to be a cult generator. You pick up a factoid here and a factoid there, and somebody else reinforces it. All of a sudden you have some [crazed] picture which has some factual basis but nothing to do with the world. You have to know how to evaluate, interpret, and understand. Take biology again. The person who wins the Nobel Prize in biology is not the person who read the most journal articles and took the most notes on them. It's the person who knew what to look for. And cultivating that capacity to seek what's significant--always willing to question whether you're on the right track--that's what education is going to be about, whether it's using computers and the internet or pencil and paper and books.

(11:48) COST OR INVESTMENT

Well, education is discussed in terms of whether it's a worthwhile investment. Does it create human capital that can be used for economic growth, and so on? That's a very strange and distorting way to even pose the question, I think.

Do we want to have a society of free, creative, independent individuals, able to appreciate and to gain from the cultural achievements of the past, and to add to them? Do we want that? Or do we want people who can increase GDP? They're not necessarily the same thing. An education of the kind that, say, Bertrand Russell, John Dewey, and others talked about is a value in itself. Whatever impact it has on the society, it's a value because it helps create better human beings. After all, that's what an educational system should be for.

On the other hand, if you want to look at it in terms of costs and benefits, take the new technology that we were just talking about. Where did that come from? Well, actually, a lot of it was developed right where we're sitting. Down below where we now are was a major laboratory back in the 1950s, where I was employed, in fact, which had lots of scientists, engineers, and people of all kinds of interests--philosophers and others--who were working on developing the basic character and even the basic tools of the technology that has now come.

Computers and the internet for example, were pretty much in the public sector for decades, funded in places like this, where people were exploring new possibilities that were mostly unthought of, unheard of at the time. Some of them worked, some didn't. The ones that worked were finally converted into tools that people could use. Now that's the way scientific progress takes place. It's the way that cultural progress takes place generally.

(14:24) Classical artists, for example, came out of a tradition of craftsmanship that was developed over long periods with master artisans and others. Sometimes you can rise on their shoulders and create new, marvelous things. But it doesn't come from nowhere. If there isn't a lively cultural and educational system which is geared towards encouraging creative exploration, independence of thought, willingness to challenge, to cross frontiers, to challenge accepted beliefs, and so on--if you don't have that, you're not going to get the technology that can lead to economic gains. Though I don't think that's the prime purpose of cultural enrichment, or of education as a part of it.

(15:44) ASSESSMENT VS AUTONOMY

There is, in the recent period particularly, an increasing shaping of education from the early ages on towards passing examinations. Taking tests can be of some use, both for the person taking the test--seeing what I know, where I am, and what I've achieved--and for instructors--seeing what should be changed and improved in developing the course of instruction.

But beyond that, they don't really tell you very much. I mean, for many, many years I've been on admissions committees for entry into advanced graduate programs--maybe one of the most advanced anywhere--and we of course pay some attention to test results, but really not too much. A person can do magnificently on every test and understand very little. All of us who've been through schools and colleges and universities are very familiar with this.

You can be assigned to, say, some course that you have no interest in, and there's a demand that you pass a test, and you can study hard for the test, and you can ace it. And a couple of weeks later you've forgotten what the topic was. I'm sure we've all had that experience; I know I have. A test can be a useful device if it contributes to the constructive purposes of education.

If it's just a set of hurdles you have to cross, it can turn out to be not only meaningless, but it can divert you away from things that you want to be doing. I see this regularly when I talk to teachers. To give an experience from a couple of weeks ago: I happened to be talking to a group which included many schoolteachers. One of them was a sixth-grade teacher, who teaches kids of, I guess, ten or eleven or twelve, something like that. She came up to me afterwards--I'd been talking about these things--and told me of an experience she had just had in her class. After class, a little girl came up to her and said she was really interested in something that had come up, and she asked if the teacher could give her some ideas of how she could look into it further. And the teacher was compelled to tell her, "I'm sorry, but you can't do that; you have to study to pass this national exam that's coming, the one that's going to determine your future." What the teacher didn't say is, "and it's going to determine my future--whether I am rehired."

The system is geared to getting the children to pass hurdles, but not to learn and understand and explore. That child would have been better off if she had been allowed to explore what she was interested in and maybe not do so well on the test about things she wasn't interested in. They'll come along when they fit into her interests and concerns.

So, I don't say that tests should be eliminated; they can be a useful educational tool. But an ancillary one--something that helps us, instructors and students alike, improve what we're doing and tells us where we are. Passing tests doesn't begin to compare with searching, inquiring into, and pursuing topics that engage us and excite us. That's far more significant than passing tests. In fact, if that's the kind of educational career you're given the opportunity to pursue, you'll remember what you discovered.

There is a famous physicist, a world famous physicist right here at MIT who was teaching freshman courses. He once said that in his freshman course, students will ask, "What are we going to cover this semester?" And his standard answer was, "It doesn't matter what we cover, it matters what you discover."

That's right. Teaching ought to be inspiring students to discover on their own. To challenge if they don't agree. To look for alternatives if they think there are better ones. To work through the great achievements of the past and try to master them on their own because they're interested in them. If that's the way teaching is done, students will really gain from it and will not only remember what they've studied but be able to use it as a basis for going on on their own.

And again, education is really aimed at just helping students get to the point where they can learn on their own, because that's what you're going to do for your life--not just absorb materials that are given to you from the outside and repeat them.
LWF | The Purpose of Education

Tuesday, October 30, 2012

Shadow Dancing

You got me lookin' at that Heaven in your eyes
I was chasing your direction, I was tellin' you no lies
And I was loving you when the words you said
Baby, I lose my head

And in a world of people there's only you and I
There ain't nothing come between us in the end
How can I hold you when you ain't even mine?
Only you can see me through, I leave it up to you

Do it light, taking me through the night
Shadow dancing, baby, you do it right
Give me more, drag me across the floor
Shadow dancing, all this and nothin' more

All that I need is just one moment in your arms
I was chasing your affection, I was doing you no harm
And I was loving you, make it shine, make it rain
Baby, I know my way

I need that sweet sensation of living in your love
I can't breathe when you're away, it pulls me down
You are the question and the answer am I
Only you can see me through, I leave it up to you

Do it light, taking me through the night
Shadow dancing, baby, you do it right
Give me more, drag me across the floor
Shadow dancing, all this and nothin' more

And in this world of people there's only you and I
There ain't nothing come between us in the end
How can I hold you when you ain't even mine?
Only you can see me through, I leave it up to you

Do it light, taking me through the night
Shadow dancing, baby, you do it right
Give me more, drag me across the floor
Shadow dancing, all this and nothing more

Do it light, taking me through the night
Shadow dancing, baby, you do it right
Give me more, drag me across the floor
Shadow dancing, all this and nothing more

Do it light, taking me through the night
Shadow dancing, baby, you do it right
Give me more, drag me across the floor
Shadow dancing, all this and nothing more

Do it light, taking me through the night
Shadow dancing, baby, you do it right
Give me more, drag me across the floor
Shadow dancing, all this and nothing more

Do it light, taking me through the night
Shadow dancing, baby, you do it right
Give me more, drag me across the floor
Shadow dancing, all this and nothing more
Andy Gibb: Shadow Dancing

Friday, October 26, 2012

Edge: What is Life?

WHAT IS LIFE? A 21st CENTURY PERSPECTIVE

J. CRAIG VENTER: I was asked earlier whether the goal is to dissect what Schrödinger had spoken and written, or to present a new summary, and I always like to be forward-looking, so I won't give you a history lesson except very briefly. I will present our findings, first on reading the genetic code, and then on learning to synthesize and write the genetic code. As many of you know, we synthesized an entire genome and booted it up to create an entirely new synthetic cell, where every protein in the cell was based on the synthetic DNA code.

As you all know, Schrödinger's book was published in 1944 and it was based on a series of three lectures here, starting in February of 1943. And he had to repeat the lectures, I read, on the following Monday because the room on the other side of campus was too small, and I understand people were turned away tonight, but we're grateful for Internet streaming, so I don't have to do this twice.  

Also, due clearly to his historical role, it's interesting to be sharing this event with Jim Watson, whom I've known and had multiple interactions with over the last 25 years, including most recently sharing the Double Helix Prize for Human Genome Sequencing with him from Cold Spring Harbor Laboratory a few years ago.

Schrödinger started his lecture with a key question and an interesting insight on it. The question was, "How can the events in space and time, which take place within the boundaries of a living organism, be accounted for by physics and chemistry?" It's a pretty straightforward, simple question. Then he answered what he could at the time: "The obvious inability of present-day physics and chemistry to account for such events is no reason at all for doubting that they will be accounted for by those sciences." While I only have around 40 minutes, not three lectures, I hope to convince you that there has been substantial progress in the nearly 70 years since Schrödinger first asked that question, to the point where the answer is at least nearly at hand, if not in hand.

I view that we're now in what I'm calling "The Digital Age of Biology." My team's work on synthesizing genomes based on digital code in the computer, and four bottles of chemicals, illustrates the ultimate link between the computer code and the genetic code.

Life as code, as you heard in the introduction, was very clearly articulated by Schrödinger as "code-script." Perhaps even more importantly, and something I missed on the first few readings of his book earlier in my career, is that as far as I can tell, it's the first mention that this code could be as simple as a binary code. He used the example of how the Morse code, with just dots and dashes, could be sufficient to give 34 different specifications. I've searched and I have not found any earlier references to the Morse code, although a historian that I know wrote Crick a letter asking about that, and Crick's response was, "It was a metaphor that was obvious to everybody." I don't know if it was obvious to everybody after Schrödinger's book, or some time before.

Schrödinger was right about a lot of things, which is why, in fact, we celebrate what he talked about and what he wrote about, but some things he was clearly wrong about. Like most scientists of his time, he relied on the biologists of the day, who thought that protein, not DNA, was the genetic information. It's really quite extraordinary, because in 1944, the same year that he published his book, Oswald Avery--who was 65 and about ready to retire--along with his colleagues Colin MacLeod and Maclyn McCarty, published their key paper demonstrating that DNA was the substance that causes bacterial transformation, and therefore was the genetic material.

This experiment was remarkably simple, and I wonder why it wasn't done 50 years earlier, with all the wonderful genetics work going on with Drosophila and chromosomes. Avery simply used proteolytic enzymes to destroy all the proteins associated with the DNA, and showed that the naked DNA was, in fact, the transforming factor. The impact of this paper was far from instantaneous, as has often happened in this field, in part because there was so much bias against DNA and in favor of proteins that it took a long time for the result to sink in.


In 1949 came the first sequencing of a protein, by Fred Sanger, and the protein was insulin. This work showed that proteins consist of linear amino acid sequences. Sanger won the Nobel Prize in 1958 for his achievement. The sequence of insulin was key in leading to an understanding of the link between DNA and proteins. But as you heard in the introduction, the discovery that changed the whole field and started us down the DNA route was the 1953 work by Watson and Crick, with help from Maurice Wilkins and Rosalind Franklin, showing that DNA was, in fact, a double helix, which provided a clear explanation of how DNA could be self-replicated. Again, this was not, as I understand it, instantly perceived as a breakthrough, because of the bias of biochemists who were still hanging on to proteins as the genetic material. But soon, with a few more experiments from others, the world began to change pretty dramatically.

The next big thing came from the work of Gobind Khorana and Marshall Nirenberg in 1961, when they worked out the triplet genetic code: three letters of genetic code coding for each amino acid. With this breakthrough it became clear how the linear DNA code coded for the linear protein code. This was followed a few years later by Robert Holley's discovery of the structure of tRNA; tRNA is the key link between the messenger RNA and bringing the amino acids in for protein synthesis. Holley, Nirenberg, and Khorana shared the Nobel Prize in 1968 for their work.
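The triplet code described above amounts to a lookup: each three-letter codon selects one amino acid, and certain codons signal "stop". A minimal sketch in Python (only a handful of codons are included here for illustration; the real table has 64 entries):

```python
# Toy illustration of the triplet genetic code: each three-letter DNA
# codon maps to one amino acid. This is a tiny subset of the full
# 64-entry codon table, chosen only for demonstration.
CODON_TABLE = {
    "ATG": "Met",  # methionine, also the usual start codon
    "AAA": "Lys",  # lysine
    "GAA": "Glu",  # glutamate
    "TGG": "Trp",  # tryptophan
    "TAA": "Stop", # a stop codon
}

def translate(dna):
    """Read a DNA string three letters at a time and emit amino acids."""
    protein = []
    for i in range(0, len(dna) - len(dna) % 3, 3):
        codon = dna[i:i + 3]
        aa = CODON_TABLE.get(codon, "?")  # "?" marks codons not in our toy table
        if aa == "Stop":
            break
        protein.append(aa)
    return "-".join(protein)

print(translate("ATGAAAGAATGGTAA"))  # Met-Lys-Glu-Trp
```

The point of the sketch is simply that the mapping is mechanical: the linear DNA code determines the linear protein code with no other information required.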



The next decade brought us restriction enzymes from my friend and colleague Ham Smith, who is at the Venter Institute now and discovered the first restriction enzymes in 1970. These are the molecular scissors that cut DNA very precisely, and they enabled the entire field of molecular engineering and molecular biology. Ham Smith, Werner Arber, and Dan Nathans shared the Nobel Prize in 1978 for their work. The '70s brought not only some interesting dress codes and characters, (Laughter) but also the beginning of the molecular splicing revolution. Using restriction enzymes, Cohen and Boyer, and Paul Berg, published the first papers on recombinant DNA; Cohen and Boyer at Stanford filed a patent on their work, and this was used by Genentech and Eli Lilly to produce human insulin as the first recombinant drug.

DNA sequencing and reading the genetic code progressed much more slowly. In 1973 Maxam and Gilbert published a paper on only 24 base pairs--24 letters of genetic code. RNA sequencing progressed a little faster, so the first actual genome to be sequenced, an RNA viral genome, was completed in 1976 by Walter Fiers of Belgium. This was followed by Fred Sanger's sequencing of the first DNA virus, Phi X 174. This became the first viral DNA genome, and it was accompanied by a new technique of dideoxy DNA sequencing, now referred to as "Sanger sequencing," that Sanger introduced.

This is a picture of Sanger's team that sequenced Phi X 174. The second guy from the left on the bottom is Clyde Hutchison, who is also a member of the Venter Institute, and joined us after retiring from the University of North Carolina, and played a key role in some of the synthetic genome work.

In 1975 I was just getting my PhD as the first genes were being sequenced. Twenty years later I led the team that sequenced the first genome of a living species, Haemophilus influenzae, and Ham Smith was part of that team. Instead of 5,000 letters of genetic code, this was 1.8 million letters, or about 300 times the size of the Phi X genome. Five years later, we upped the ante another 1,600 times with the first draft of the human genome, using our whole-genome shotgun technique.
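The scale jumps quoted here can be verified with quick arithmetic. A small sketch (the exact genome sizes are approximate figures I'm assuming, e.g. ~5,386 bases for Phi X 174 and ~3 billion for the human genome):

```python
# Rough sanity check of the genome-size ratios in the text.
# Sizes are approximate, assumed for illustration:
PHI_X = 5_386              # Phi X 174, bases
H_INFLUENZAE = 1_800_000   # Haemophilus influenzae, "1.8 million letters"
HUMAN = 3_000_000_000      # human genome, ~3 billion bases

print(H_INFLUENZAE / PHI_X)   # ~334, i.e. "about 300 times" Phi X
print(HUMAN / H_INFLUENZAE)   # ~1,667, the "another 1,600 times" step
print(HUMAN / PHI_X)          # ~557,000, "about half a million times"
```

The last ratio matches the "half a million times larger" comparison made a little further on.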


I view DNA as an analogue coding molecule, and when we sequence the DNA, we are converting that analogue code into digital code; the 1s and 0s in the computer are very similar to the dots and dashes of Schrödinger's metaphor. I call this process "digitizing biology".
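As a toy illustration of that analogue-to-digital conversion, the four DNA bases can be packed into two bits each, making the dots-and-dashes analogy literal. The particular bit assignments below are arbitrary choices of mine, not any standard encoding:

```python
# "Digitizing biology" in miniature: a reversible 2-bit encoding of the
# four DNA bases. The specific bit patterns are arbitrary assumptions.
BASE_TO_BITS = {"A": "00", "C": "01", "G": "10", "T": "11"}
BITS_TO_BASE = {bits: base for base, bits in BASE_TO_BITS.items()}

def digitize(dna):
    """Convert a DNA string into a string of 0s and 1s."""
    return "".join(BASE_TO_BITS[base] for base in dna)

def undigitize(bits):
    """Recover the DNA string from its 2-bit encoding."""
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

bits = digitize("GATTACA")
print(bits)              # 10001111000100
print(undigitize(bits))  # GATTACA
```

Because the mapping is lossless in both directions, the digital file carries exactly the same information as the sequence of bases, which is what makes genome synthesis from computer code conceivable at all.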


The human genome is about half a million times larger than the Phi X genome, which shows how fast things were developing. Reading genomes has now progressed extremely rapidly: from requiring years or decades, it now takes about two hours to sequence a human genome. Instead of talking about hours per genome, we can talk about genomes per hour; we recently demonstrated sequencing 2,000 complete microbial genomes in one machine run. The pace is changing quite substantially.


Numerous scientists have drawn the analogy between computers and biology. I take it even further. I describe DNA as the software of life, and when we activate a synthetic genome in a recipient cell, I describe it as booting up a genome, the same way we talk about booting up software in a computer.

June 23rd of this year would have been Alan Turing's 100th birthday. Turing described what have come to be known as Turing machines: a machine following a set of instructions written on a tape. He also described the Universal Turing Machine, a machine that could take that set of instructions and rewrite them, and this was the original version of the digital computer. His ideas were carried further in the 1940s by John von Neumann, who, as many people know, conceived of the self-replicating machine. Von Neumann's machine consisted of a series of cells that encoded a sequence of actions to be performed by the machine; using a writing head, the machine could print out a new pattern of cells, allowing it to make a complete copy of itself on the tape. Many scientists have made the obvious analogy between Turing machines and biology. The latest was quite recently, in Nature, by Sydney Brenner, who played a role in almost all the early stages of molecular biology. Brenner wrote an article about Turing and biology in which he argued that the best examples of Turing and von Neumann machines come from biology, with the self-replicating code and the internal description of itself, and that this is the key kernel of biological theory.

While software was pouring out of sequencing machines around the world, substantial progress was being made in describing the hardware of life: proteins. In biochemistry, the first two decades of the 20th century were dominated by what was called "the colloid theory": life itself was explained in terms of the aggregate properties of all the colloidal substances in an organism. We now know that those substances are a collection of three-dimensional protein machines, each evolved to carry out a very specific task.

Now, these proteins have been described as nature's robots. If you think about it, for every single task in the cell, every imaginable task, as described by Tanford and Reynolds, "there is a unique protein to carry out that task. It's programmed when to go on, when to go off. It does this based on its structure. It doesn't have consciousness; it doesn't have a control from the mind or higher center. Everything a protein does is built into its linear code, derived from the DNA code".


There are multiple protein types, and almost everything you know in your own life is protein-derived. About a quarter of your body is collagen, a matrix protein just built up of multiple layers. We have rubber-like proteins that form blood vessels as well as lung tissue, transporters that move things in and out of cells, and enzymes that copy DNA, metabolize sugars, et cetera.

[Movie]

One of the most important breakthroughs, outside of the genetic code, was determining the process of protein synthesis. To show you how recent all of this is: this is the three-dimensional structure of the bacterial ribosome, determined in 2005, and the three-angstrom structure of the eukaryotic ribosome, which was just determined and published in December of last year. These ribosomes are extraordinary molecules. They are the most complex machinery we have in the cell; as you can see, there are numerous components. I think of the ribosome as maybe the Ferrari engine of the cell. If the engine can't convert the messenger RNA tape into proteins, there is no life. If you interfere with that process, you kill life. The major antibiotics that we all know about--the aminoglycosides, tetracycline, chloramphenicol, erythromycin, et cetera--all kill bacterial cells by interfering with the function of the ribosome.

The ribosome is clearly the most distinctive and special structure in the cell. It has seven major RNA chains--including three tRNA chains and one messenger RNA--47 different proteins going into the structure, and one newly synthesized protein chain, with a size of several million daltons. This is the heart of all biology. We would not have cells, we would not have life, without this machine that converts the linear DNA code into proteins.

The process of converting the DNA code into protein starts with the synthesis of mRNA from DNA, called "transcription", and protein synthesis from the mRNA is called "translation". If these processes were highly reliable, life would be very different, and perhaps we would not need the same kind of information-driven system. If you were building a factory to build automobiles that worked the way the ribosome does, you would be out of business very quickly. A significant fraction of all the proteins synthesized are degraded shortly after synthesis, because they have folded into the wrong conformations and aggregate in the cell, or cause some other problem.

Transfer RNA brings in the final amino acid to a growing peptide chain coming out of the ribosome. The next step is truly one of the most remarkable in nature: the self-folding of proteins. The number of potential protein conformations is enormous: if you have 100 amino acids in a protein, there are on the order of 2 to the 100th power different conformations, and it would take about ten to the tenth years to try each conformation one by one. But built into the linear protein code, with each amino acid, are the folding instructions, in turn determined by the linear genetic code. As a result these processes happen very quickly.
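The "ten to the tenth years" figure can be reproduced with back-of-the-envelope arithmetic. A quick sketch, assuming a sampling rate of one conformation per picosecond (the rate is my assumption, chosen only to show the order of magnitude):

```python
# Back-of-the-envelope check of the folding numbers quoted above:
# 2**100 possible conformations, tried one at a time at an assumed
# rate of one conformation per picosecond.
conformations = 2 ** 100            # ~1.27e30 conformations
rate_per_second = 1e12              # one try per picosecond (assumption)
seconds = conformations / rate_per_second
years = seconds / (60 * 60 * 24 * 365)
print(f"{years:.1e} years")         # ~4e10 years, i.e. the ~10^10-year order
```

This is the classic Levinthal-style argument: exhaustive search is hopeless, so folding must be guided by the sequence itself rather than by trial of every conformation.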

[Movie]

Here's a movie that spreads 6 microseconds of protein folding over several seconds, to show you the folding of a small protein. This is the final folded structure; it starts as a linear protein, and over 6 microseconds it goes through all the different conformations to try to get to the final fold.

Somehow the linear amino acid code limits the number of possible folds a protein can take, but each protein tries a large number of different ones, and if it gets them wrong in the end, the protein has to be degraded very quickly or it will cause problems. Imagine all the evolutionary selection that went into these processes, because the protein sequence determines its rate of folding, as well as the final structure and hence its function. In fact, the N-terminal amino acid determines how fast a protein is degraded. This is now called the "N-end rule" pathway for protein degradation. For example, if you have the amino acid lysine, arginine, or tryptophan as the N-terminus of the protein beta-galactosidase, the result is a protein with a half-life of 120 seconds in E. coli, or 180 seconds in a eukaryotic cell, yeast. Whereas with three other amino acids--serine, valine, or methionine--you get a half-life of over ten hours in bacteria, and over 30 hours in yeast.
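The half-lives quoted here amount to a small lookup keyed on the N-terminal residue. A sketch using only the figures given in the talk (so the numbers are illustrative thresholds for beta-galactosidase, not a complete N-end rule table):

```python
# Half-lives by N-terminal residue, as quoted in the talk for
# beta-galactosidase. Values are (E. coli, yeast) in seconds; the ">10 h"
# and ">30 h" figures are treated here as exact, which is a simplification.
HALF_LIFE = {
    "Lys": (120, 180),
    "Arg": (120, 180),
    "Trp": (120, 180),
    "Ser": (10 * 3600, 30 * 3600),
    "Val": (10 * 3600, 30 * 3600),
    "Met": (10 * 3600, 30 * 3600),
}

def half_life(n_terminal_residue, organism="E. coli"):
    """Return the quoted half-life in seconds for a given N-terminal residue."""
    ecoli, yeast = HALF_LIFE[n_terminal_residue]
    return ecoli if organism == "E. coli" else yeast

print(half_life("Arg"))           # 120 seconds: a destabilizing residue
print(half_life("Met", "yeast"))  # 108000 seconds (30 hours): stabilizing
```

The spread is the striking part: swapping a single N-terminal residue changes the quoted half-life by a factor of several hundred.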

Because of the instability, aggregation and turnover of proteins in a cell one of the most important pathways in any cell is the proteolytic pathway.  Degradation of proteins and protein fragments is of vital importance as they can be highly toxic to the cell by forming intracellular aggregates.

A bacterial cell, in an hour or less, will have to remake all of its proteins. Our cells make proteins at a similar rate, but because of protein instability and the random folding and misfolding of proteins, protein aggregation is a key problem. We have to constantly synthesize new proteins, and if they fold wrong, you have to get rid of them or they clog up the cell, the same way that if you stop taking out the trash in a city, everything comes to a halt. Cells work the same way. You have to degrade the proteins; you have to pump the trash out of the cell. Misfolded proteins can become toxic very quickly. There are a number of diseases known to most of you that are due to misfolding or aggregation; Alzheimer's and mad cow disease are examples of diseases caused by the accumulation of toxic protein aggregates.




Several human diseases arise from protein misfolding leaving too little of the normal protein to do its job properly. The most common hereditary disease of this type is cystic fibrosis. Recent research has clearly shown that the many, previously mysterious symptoms of this disorder all derive from lack of a protein that regulates the transport of the chloride ion across the cell membrane. More recently scientists have shown that by far the most common mutation underlying cystic fibrosis hinders the dissociation of the transport regulator protein from one of its chaperones. Thus, the final steps in normal folding cannot occur, and normal amounts of active protein are not produced.

I'm trying to leave you with a notion that life is a process of dynamic renewal. We're all shedding about 500 million skin cells every day. That is the dust that accumulates in your home; that's you.  You shed your entire outer layer of skin every two to four weeks. You have five times ten to the 11th blood cells that die every day. If you're not constantly synthesizing new cells, you die.  During normal organ development about half of all of our cells die. Everything in life is constantly turning over and being renewed by rereading the DNA software and making new proteins.

Life is a process of dynamic renewal. Without our DNA, without the software of life cells die very rapidly.  Rapid protein turnover is not just an issue for bacterial cells; our 100 trillion human cells are constantly reading the genetic code and producing proteins.  A recent study assaying 100 proteins in living human cancer cells showed half-lives that ranged between 45 minutes and 22.5 hours.
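
Assuming simple first-order (exponential) decay, which is the standard model for protein turnover, those half-lives translate into survival fractions like this. The 24-hour window is an arbitrary choice for illustration.

```python
# Given the half-life range quoted above (45 minutes to 22.5 hours),
# compute what fraction of a protein pool survives after a fixed time,
# assuming simple first-order (exponential) decay.

def fraction_remaining(hours: float, half_life_hours: float) -> float:
    """Fraction of molecules left after `hours` of exponential decay."""
    return 0.5 ** (hours / half_life_hours)

for hl in (0.75, 22.5):             # 45 min and 22.5 h half-lives
    left = fraction_remaining(24, hl)
    print(f"half-life {hl:>5} h: {left:.4f} of the pool left after a day")
```

A short-lived protein is essentially gone within a day, while even the most stable proteins in that study are down to roughly half; the cell really is in constant turnover.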

[Enlarge]

Now, as you know, all life is cellular, and the cell theory of life is that you can only get life from preexisting cells; all kinds of special vitalistic properties have been attributed to cells over time. This slide shows an artist's view of what a cell's cytoplasm might look like. It's quite a crowded place; it's relatively viscous; it's not an empty bag with a few proteins floating around in it, and it's a unique environment. About one quarter of the proteins are in the solid phase.

A key tenet of chemistry is the notion of "synthesis as proof". This perhaps dates back to 1828, when Friedrich Wöhler synthesized urea. The Wöhler synthesis is of great historical significance because for the first time an organic compound was produced from inorganic reactants. Wöhler's synthesis of urea was heralded as the end of vitalism, because vitalists thought that you could only get organic molecules from living entities. Today there are tens of thousands of scientific papers published that either have "proof by synthesis" as the starting point or as a key part of the title.

We decided to take the approach of proof by synthesis: that DNA codes for everything a cell produces and does. Back in 1995, when we sequenced the first genome, we sequenced a second genome that year. For comparative purposes we were looking for the cell with the smallest genome, and we chose Mycoplasma genitalium. It has 482 protein-coding genes and 43 RNA genes. We asked simple questions: how many of these genes are essential for life? What's the smallest number of genes needed for the cellular machinery? After extensive experimentation we ultimately decided that the only way to answer these questions would be to design and construct a minimal DNA genome of the cell. As soon as we started down that route, we had new questions. Would chemistry even allow us to synthesize a bacterial chromosome? And if we could, would we just have a large piece of DNA, or could we boot it up in the cell like a new chemical piece of software?

We decided to start where DNA history started, with Phi X 174. We chose it as a test synthesis because you can make very few changes in the genetic code of Phi X without destroying the virus. Clyde Hutchison, who helped sequence Phi X in Sanger's lab, Ham Smith, and I developed a new series of techniques to correct the errors that take place when you synthesize DNA. The machines that synthesize DNA are not particularly accurate; the longer the piece of DNA you make, the more spelling errors. We had to find ways to correct errors.

[Enlarge]

Starting with the digital code, we synthesized DNA fragments and assembled the genome. We corrected the errors, and in the end had a 5,386-base-pair piece of DNA that we inserted into E. coli; this is the actual photo of what happened. The E. coli recognized the synthetic piece of DNA as normal DNA, and the proteins, being robots, just started reading the synthetic genetic code, because that's what they're programmed to do. They made what the DNA code told them to make: the viral proteins. The viral proteins self-assembled and formed a functional virus. The virus showed its gratitude by killing the cells, which is how we get these clear plaques in a lawn of bacterial cells. I call this a situation where the "software is building its own hardware". All we did was put a piece of DNA software in the cell, and we got out a protein virus with a DNA core.

Now, our goal was not to make viruses; we wanted to make something substantially larger: an entire chromosome of a living cell. But we thought if we could make viral-sized chromosomes accurately, maybe we could make 100 or so of those and find a way to put them together. That's what we did.

Starting with synthetic segments the size of Phi X, the teams sequentially assembled larger and larger segments. At each stage we sequence-verified them before going on to the next assembly stage. We put four pieces together, creating segments that were 24,000 letters long. We would clone the segments in E. coli, sequence them, and assemble three together to get pieces that were 72,000 base pairs. This was a very laborious process that took about a year and a half.

We put two of the 72,000-bp segments together to obtain new segments, each representing one quarter of the genome at 144,000 letters in length. This was way beyond the largest piece that had ever been synthesized by others, of only 30,000 base pairs. E. coli didn't like these large pieces of synthetic DNA in it, so we switched to yeast. I found out last night that this is a city that loves beer, which is produced by the same brewer's yeast. Aside from fermentation, this little cell has remarkable properties for assembling DNA.

[Enlarge]

All we had to do was put the four synthetic quarter molecules into yeast with a small synthetic yeast centromere, and yeast automatically assembled these pieces together. That gave us the first synthetic bacterial chromosome, and this is what we published in 2008. This was the largest chemical of a defined structure ever synthesized.

We continued to work on DNA synthesis, and somebody who started out as a young post-doc, Dan Gibson, came up with a substantial breakthrough. Instead of hours, to days, to years, he found that by putting three enzymes together with all the DNA fragments in one tube at 50 degrees centigrade for a little while, they would automatically assemble the pieces. DNA assembly went from days down to an hour. This was a breakthrough for a number of reasons. Most importantly, it now allows us to automate genome synthesis. Having a simple one-step method allows us to go from the digital code in the computer to the analogue code of DNA in a robotic fashion. This means scaling up substantially. We proved this initially by synthesizing the mouse mitochondrial genome in just one step.

I had two teams working, one on the chemistry and one on the biology. It turns out the biology ended up being more difficult than the chemistry. How do you boot up a synthetic chromosome in a cell? This took substantial time to work out, and this paper that we published in 2007 is one of the most important for understanding how cells work and what the future of this field brings.

This paper is where we describe genome transplantation, and how by simply changing the genetic code, the chromosome, in one cell, swapping it out for another, we converted one species into another. Because this is so important to the theme of what we're doing, I'm going to walk you through it a little bit. And by the way, pictured here are two of the scientists who led this effort, Carole Lartigue and John Glass, along with the team working with them.


[Enlarge]


[Enlarge]

We started by isolating the chromosome from a cell called M. mycoides. Chromosomes are enshrouded with proteins, which is why there was confusion for so many years over whether the proteins or the DNA was the genetic material. We simply did what Avery did: we treated the DNA with proteolytic enzymes, removing all the proteins, because if we're making a synthetic chromosome, we needed to know whether naked DNA could work on its own, or whether some special proteins would be needed for transplantation. We added a couple of gene cassettes to the chromosome, one so we could select for it, and another that turns cells bright blue if it gets activated. After considerable effort we found a way to transplant this genome into a recipient cell, M. capricolum, which is about the same distance apart genetically from M. mycoides as we are from mice. So relatively close, on the order of 10 percent or more different.




[Enlarge]

Let me show you what happened with this very sophisticated movie. We inserted the M. mycoides chromosome into the recipient cell. Just as with the Phi X, as soon as we put this DNA into this cell, the protein robots started producing mRNA, started producing proteins. Some of the early proteins produced were the restriction enzymes that Ham Smith discovered in 1970, we think that they recognized the initial chromosome in the cell as foreign DNA and chewed it up.  Now we have the body and all the proteins of one species, and the genetic software of another. What happened?

[Enlarge]

In a very short period of time we have these bright blue cells. When we interrogated the cells, they had only the transplanted genome, but more importantly, when we sequenced the proteins in these cells, there wasn't a single protein or other molecule from the original species. Every protein in the cell came from the new DNA that we inserted into the cell. Life is based on DNA software. We're a DNA software system, you change the DNA software, and you change the species. It's a remarkably simple concept, remarkably complex in its execution.

Now, we had a problem that some of you may have picked up on or have read about. We were assembling the bacterial chromosome in a eukaryotic cell. If we were going to take the synthetic genome and do the transplantations, we had to find a way to get the genome out of yeast to transplant it back into the bacterial cell. We developed a whole new way to grow bacterial chromosomes in yeast as eukaryotic chromosomes. It was remarkably simple in the end. All we do is add a very small synthetic centromere from yeast to the bacterial chromosome, and all of a sudden it turns into a stable eukaryotic chromosome.


[Enlarge]


[Enlarge]

Now we can stably grow bacterial chromosomes in yeast. We had the situation where we had the M. mycoides chromosome in the eukaryotic cell, so we could try isolating it and doing a transplantation. The trouble is, it didn't work. This little problem of why it didn't work took us two and a half years to solve. It turns out that when we initially did the transplantations, taking the chromosome out of the M. mycoides cell, that DNA had been methylated, and that's how cells protect their own DNA from interloping species. We proved this by isolating the six methylases and methylating the DNA when we took it out of yeast. If we methylated the DNA, we could then do the transplantation. We proved this ultimately by removing the restriction enzyme system from the recipient cell; in that case we can just transplant the naked unmethylated DNA, because there's nothing to destroy the DNA in the cell.

[Enlarge]

We were now at the point where we thought we had solved all the problems. We could create new bacterial strains from the bacterial genomes cloned in yeast. We had this new cycle; we could work our way around the circle; we could add a centromere to the bacterial chromosome and turn it into a eukaryotic chromosome. The advantage, for those of you who work with bacteria, is that most bacteria do not have genetic systems, which is why most scientists don't work with them. As soon as you put that bacterial genome in yeast, we have the complete repertoire of genetic tools available in yeast, such as homologous recombination. We can make rapid changes in the genome, isolate the chromosome, methylate it if necessary, and do a transplantation to create a highly modified cell.

With all the new techniques and success, we decided to synthesize the much larger M. mycoides genome. Dan Gibson led the effort, which started with pieces that were 1,000 letters long. We put ten of those together to make pieces over 10,000 letters long. We put ten of those together to make pieces that were 100,000 letters long, and we had eleven 100,000-base-pair pieces. We put them in yeast, which assembled the genome. We knew how to transplant it out of yeast, and we did the transplantation, and it didn't work.
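
The assembly hierarchy just described can be sketched as successive joins. The fragments here are placeholder strings, since only the arithmetic of the stages is being illustrated: 1,000-letter pieces to 10,000, to 100,000, to the 1.1-million-letter genome.

```python
# Sketch of the hierarchical assembly described above, with placeholder
# fragments standing in for real DNA sequence.

def assemble(pieces, group_size):
    """Join consecutive groups of fragments into larger fragments."""
    return ["".join(pieces[i:i + group_size])
            for i in range(0, len(pieces), group_size)]

fragments = ["A" * 1000] * 1100       # 1,100 starting pieces of 1,000 letters
stage1 = assemble(fragments, 10)      # 110 pieces of 10,000 letters
stage2 = assemble(stage1, 10)         # 11 pieces of 100,000 letters
genome = assemble(stage2, 11)[0]      # one 1,100,000-letter genome
print(len(stage1), len(stage2), len(genome))   # -> 110 11 1100000
```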

Those of you who are software engineers know that software engineers have debugging software to tell them where the problems are in their code. So we had to develop the biological version of debugging software, which was basically substituting natural pieces of DNA for the synthetic ones so we could find out what was wrong. We found that we could have 10 of the 11 synthetic pieces, but the last piece had to be the native genome DNA to get a living cell. We re-sequenced our synthetic segment and found one letter wrong in an essential gene that made the difference between life and no life. The deletion was in the dnaA gene, which is an essential gene for life. We corrected that error, the one error out of 1.1 million, and we got the first actual synthetic cell from the genome transplants.
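
The "biological debugging" strategy can be sketched as a substitution loop. Everything here is a hypothetical simulation: `is_viable` stands in for the actual transplantation experiment, and the single-bad-segment assumption matches the story above.

```python
# Hypothetical sketch of debugging by substitution: swap one native
# segment in at a time; if the genome becomes viable, the swapped-out
# synthetic segment held the error.

def is_viable(segments):
    # Stand-in for booting up the assembled genome in a recipient cell.
    # In this toy model, a segment carrying "error" is fatal.
    return "error" not in "".join(segments)

def find_bad_segment(synthetic, native):
    """Locate the single faulty synthetic segment by substitution."""
    for i in range(len(synthetic)):
        trial = synthetic[:i] + [native[i]] + synthetic[i + 1:]
        if is_viable(trial):
            return i
    return None

native = [f"seg{i}" for i in range(11)]
synthetic = list(native)
synthetic[5] = "seg5-error"                  # one letter wrong, as in the talk
print(find_bad_segment(synthetic, native))   # -> 5
```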

[Enlarge]

One of the ways that we knew that what we had was a synthetic cell was by watermarking the DNA, so we could always tell our synthetic species from any naturally occurring one. Now, about the watermarks: when we watermarked the first genome, we just used the single-letter amino acid code to write the authors' names in the DNA. We were accused of not having much of an imagination. For this new genome we went a little bit farther by adding three quotations from the literature. But first the team developed a whole new code whereby we could write the English language, complete with numbers and punctuation, in DNA code. It was quite interesting. We sent the paper to Science for review, and one of the reviewers sent back their review written in DNA code, much to the frustration of the Science editor, who could not decipher it. (Laughter) But the reviewer's DNA code was based on the ASCII code, and with biology that creates a problem, because you can get long stretches of code without a stop codon. We developed this new code that puts in very frequent stop codons, because the last thing you want to do is put in a quote from James Joyce and have it turn into a new toxin that kills the cell or kills you. You didn't know poetry could do that, I guess.
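
The design goal of the watermark code, writing English in DNA while inserting stop codons often enough that no long peptide can ever be translated from it, can be sketched like this. The character-to-codon table here is invented purely for illustration; it is not Venter's actual code.

```python
# Hypothetical sketch of a text-to-DNA watermark code with frequent
# stop codons.  The character-to-codon table is invented, not the one
# actually used in the synthetic genome.

import string

BASES = "ACGT"
CHARS = string.ascii_uppercase + string.digits + " .,'"
# Invented table: the i-th character maps to the i-th codon (base 4).
CODON = {c: BASES[i // 16] + BASES[(i // 4) % 4] + BASES[i % 4]
         for i, c in enumerate(CHARS)}
STOP = "TAA"                        # one of the three natural stop codons

def encode(text, stop_every=4):
    """Encode text as DNA, inserting a stop codon every few characters
    so the watermark cannot translate into a long peptide."""
    codons = []
    for i, ch in enumerate(text.upper()):
        if i and i % stop_every == 0:
            codons.append(STOP)
        codons.append(CODON[ch])
    return "".join(codons)

print(encode("TO LIVE, TO ERR"))
```

Decoding is just the reverse lookup, skipping the stop codons; the point of the design is that even a hostile reading frame hits a stop within a few codons.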

We built in the names of the 46 scientists who contributed to the effort, and there was also a message with a URL. Being the first species to have the computer as a parent, we thought it appropriate that it should have its own Web address built into the genome. As people solved the code, they would send an e-mail to the Web address written in the genome. Once numerous people had solved it, we made it available.

The first of the three quotes, and probably the most important one to this country, is from James Joyce: "To live, to err, to fall, to triumph, to recreate life out of life." Somehow that seemed highly appropriate. The second is from Oppenheimer's biography, "American Prometheus": "See things not as they are, but as they might be." The third is from Richard Feynman: "What I cannot build, I cannot understand."

Everybody thought this was very cool until a few months after this appeared, when we got a letter from James Joyce's estate attorney saying, "Did you seek permission to use this quotation?" In the US, at least, there are fair-use laws that allow you to quote up to a paragraph without seeking permission. So we sort of dismissed that one; besides, James Joyce was dead, and we didn't know how to ask him anyway.

Then we started getting an e-mail trail from a Caltech scientist saying we misquoted Richard Feynman. But if you look on the Internet, this is the quotation that you find everywhere. We argued back this is what we found, and this is what was in his biography. So to prove his point, he sent a picture of Feynman's blackboard with the original quotation, and it was, "What I cannot create, I do not understand." I think it's a much better quotation, and we've gone back to correct the DNA code so that Feynman can rest much more peacefully.

[Enlarge]




Science can go much further now. There is an exciting paper out of Stanford, from a team led by Markus Covert that included John Glass from my institute, using the work on the mycoplasma cell to do the first complete mathematical modeling of a cell. It is coming out in Cell next week. It's going to be an exciting paper. We can go from the digital code to the genetic code, and now model the entire function of the cell in a computer, going the complete digital circle. We are going even further now, using computer software to design new DNA software to create new synthetic life.

I hope it is becoming clear that all living cells that we know of on this planet are DNA-software-driven biological machines comprised of hundreds to thousands of protein robots coded for by the DNA software. The protein robots carry out precise biochemical functions developed by billions of years of evolutionary software changes. The software codes for the linear protein sequence, which in turn determines the rate of folding as well as the final three-dimensional structure and function of the protein robot. The primary sequence determines the stability of the protein and therefore its dynamic regulation in the cell. By making a copy of the DNA software, cells have the ability to self-replicate. All these processes require energy. From all the genomes we have sequenced, we have seen that there is a range of mechanisms for the generation of cellular energy molecules through a process we call metabolism. Some cells are able to transport sugars across the membrane into the cell and, by some now well-defined enzymatic processes, capture the chemical energy in the sugar molecule and supply it to the required cellular processes. Other cells, such as the autotroph Methanococcus jannaschii, use only inorganic chemicals to make every molecule in the cell while providing the cellular energy. These cells do this by a series of proteins that convert carbon dioxide into methane to generate cellular energy molecules and to provide the carbon to make proteins. These processes are all coded for in the genetic code.

Schrödinger, citing the second law of thermodynamics (the entropy principle), the natural tendency of things to go over into disorder, described his notion of "order based on order". We have now shown, using synthetic DNA genomes, that when you put new DNA software into the cell, the protein robots it codes for are produced, changing the cellular phenotype. When you change the DNA software, you change the species. This is consistent with Schrödinger's code-script and "an organism's astonishing gift of concentrating a 'stream of order' on itself …"

We can digitize life, and we generate life from the digital world. Just as the ribosome can convert the analogue message in mRNA into a protein robot, it's becoming standard now in the world of science to convert digital code into protein viruses and cells. Scientists send digital code to each other instead of sending genes or proteins. There are several companies around the world that make their living by synthesizing genes for scientific labs. It's faster and cheaper to synthesize a gene than it is to clone it, or even get it by Federal Express.

As an example, as part of our synthetic genomics flu virus program with Novartis, BARDA in the US government sends us an email with a test pandemic flu virus sequence. We convert the digital sequence into a flu virus genome in less than 12 hours. We are in the process of building a simpler, smaller, faster converter device, "a digital-to-biological converter", so that, in a fashion similar to the telephone, where digital information is converted to sound, we can send digital DNA code at close to the speed of light and convert the digital information into proteins, viruses, and living cells. With a new flu pandemic we could digitally distribute a new vaccine around the world in seconds, perhaps even to each home in the future.

Currently all life is derived from other cellular life, including our synthetic cell. This will change in the near future with the discovery of the right cocktail of enzymes, ribosomes, and chemicals, including lipids, which together with the synthetic genome will create new cells and life forms without a prior cellular history. Look at the tremendous progress in the 70 years since Schrödinger's lecture on this campus. Try to imagine 70 years from now, in the year 2082, what will be happening. With the success of private space flight, the moon and Mars will clearly be colonized. New life forms for food or energy production or for new medicines will be sent as digital information to be converted back into life forms in the 4.3 to 21 minutes that it takes for a digital wave to go from Earth to Mars.

I have suggested that, in place of sending living humans to distant galaxies, we could send digital information together with the means to boot it up in tiny space vessels. More importantly, and as I will speak to on Saturday evening, synthetic life will enable us to understand all life on this planet and to enable new industries to produce food, energy, water, and medicine as we add 1 billion new humans to Earth every 12 years.

Schrödinger's "What is Life?" helped to stimulate Jim Watson and Francis Crick to kick off this new era of DNA science. One can only hope that the newest frontier of synthetic life will have a similar impact on the future.


REMARKS BY JAMES D. WATSON

JAMES WATSON: In 1963, which was 10 years after the double helix, I began putting together a book which became The Molecular Biology of the Gene. It was before we knew the complete code; it was after what Nirenberg had shown, and so I thought we knew the general principles. I thought initially the title we would use for the book was This Is Life, but I thought that would be controversial because I hadn't explained everything, so it just became The Molecular Biology of the Gene.




Certainly Craig's talk is… it's much more beautiful 60 years later. I think chemistry is a good thing. I think our finding of the DNA structure was unusual in that neither Crick nor I knew any chemistry. Luckily there was a chemist in the room, and he helped. But I think we're in this era of beautiful high technology. Sentimentally, I hope there's still a role for the biologist.

Time will tell, but I want to congratulate Craig on a very beautiful lecture.

Edge | A Talk With Craig Venter | WHAT IS LIFE? A 21st CENTURY PERSPECTIVE | Jul 2012

Thursday, October 25, 2012

Edge: Constructor Theory

[DAVID DEUTSCH:] Some considerable time ago we were discussing my idea, new at the time, for constructor theory, which was and is an idea I had for generalizing the quantum theory of computation to cover not just computation but all physical processes. I guessed and still guess that this is going to provide a new mode of description of physical systems and laws of physics. It will also have new laws of its own which will be deeper than the deepest existing theories, such as quantum theory and relativity. At the time, I was very enthusiastic about this, and what intervened between then and now is that writing a book took much longer than I expected. But now I'm back to it, and we're working on constructor theory and, if anything, I would say it's fulfilling its promise more than I expected and sooner than I expected.

One of the first rather unexpected yields of this theory has been a new foundation for information theory. There's a notorious problem with defining information within physics, namely that on the one hand information is purely abstract, and the original theory of computation as developed by Alan Turing and others regarded computers and the information they manipulate purely abstractly as mathematical objects. Many mathematicians to this day don't realize that information is physical and that there is no such thing as an abstract computer. Only a physical object can compute things.

On the other hand, physicists have always known that in order to do the work that the theory of information does within physics, such as informing the theory of statistical mechanics, and thereby, thermodynamics (the second law of thermodynamics), information has to be a physical quantity. And yet, information is independent of the physical object that it resides in.

I'm speaking to you now: information starts as some kind of electrochemical signals in my brain, and then it gets converted into other signals in my nerves and then into sound waves and then into the vibrations of a microphone, mechanical vibrations, then into electricity and so on, and presumably will eventually go on the Internet. This same information has been instantiated in radically different physical objects that obey different laws of physics. Yet in order to describe this process you have to refer to the thing that has remained unchanged throughout the process, which is only the information rather than any obviously physical thing like energy or momentum.

The way to get this substrate independence of information is to refer it to a level of physics that is below and more fundamental than things like laws of motion, which we have been used to thinking of as near the lowest, most fundamental level of physics. Constructor theory is that deeper level of physics, physical laws and physical systems, more fundamental than the existing prevailing conception of what physics is (namely particles and waves and space and time and an initial state and laws of motion that describe the evolution of that initial state).

What led to this hope for this new kind of foundation for the laws of physics was really the quantum theory of computation. I had thought for a while that the quantum theory of computation is the whole of physics. The reason why it seemed reasonable to think that was that a universal quantum computer can simulate any other finite physical object with arbitrary accuracy, and that means that the set of all possible motions, which is computations, of a universal computer, corresponds to the set of all possible motions of anything. There is a certain sense in which studying the universal quantum computer is the same thing as studying every other physical object. It contains all possible motions of all possible physical objects within its own possible diversity.

I used to say that the quantum theory of computation is the whole of physics because of this property. But then I realized that that isn't quite true, and there's an important gap in that connection. Namely, although the quantum computer can simulate any other object and can represent any other object so that you can study any object via its characteristic programs, what the quantum theory of computation can't tell you is which program corresponds to which physical object.

This might sound like an inessential technicality, but it's actually of fundamental importance, because not knowing which abstraction in the computer corresponds to which object is a little bit like having a bank account and the bank telling you, "Oh, your balance is some number." Unless you know what number it is, you haven't really expressed the whole of the physical situation of you and your bank account. Similarly, if you're only told that your physical system corresponds to some program of the quantum computer, and you haven't said which, then you haven't specified the whole of physics.

Then I thought, what we need is a generalization of the quantum theory of computation that does say that, that assigns to each program the corresponding real object. That was an early conception of constructor theory, making it directly a generalization of the theory of computation. But then I realized that that's not quite the way to go, because that still tries to cast constructor theory within the same mold as all existing theories and, therefore, it wouldn't solve this problem of providing an underlying framework. It would still mean that, just as a program has an initial state and then laws of motion (that is, the laws of the operation of the computer) and then a final state (which is the output of the computer), so that way of looking at constructor theory would simply have been a translation of existing physics. It wouldn't have provided anything new.

The new thing, which I think is the key to the fact that constructor theory delivers new content, was that the laws of constructor theory are not about an initial state, laws of motion, final state or anything like that. They are just about which transformations are possible and which are impossible. The laws of motion and that kind of thing are indirect remote consequences of just saying what's possible and what's impossible. Also the laws of constructor theory are not about the constructor. They're not about how you do it, only whether you can do it, and this is analogous to the theory of computation.

The theory of computation isn't about transistors and wires and input/output devices and so on. It's about which transformations of information are possible and which aren't. Since we have the universal computer, we know that each possible one corresponds to a program for a universal computer, but the universal computer can be made in lots of different ways. How you make it is inessential to the deep laws of computation.

In the case of constructor theory, what's important is which transformations of physical objects are possible and which are impossible. When they're possible, you'll be able to do them in lots of different ways usually. When they're impossible, that will always be because some law of physics forbids them, and that is why, as Karl Popper said, the content of a physical theory, of any scientific theory, is in what it forbids and also in how it explains what it forbids.

If you have this theory of what is possible and what is impossible, it implicitly tells you what all the laws of physics are. That very simple basis is proving very fruitful already, and I have great hopes that various niggling problems and notorious difficulties in existing formulations of physics will be solved by this single idea. It may well take a lot of work to see how, but that's what I expect, and I think that's what we're beginning to see. This is often misunderstood as claiming that only scientific theories are worth having. That, as Popper once remarked, is a silly interpretation. For example, Popper's own theory is a philosophical theory. He certainly wasn't saying that was an illegitimate theory.

In some ways this theory, just like quantum theory and relativity and anything that's fundamental in physics, overlaps with philosophy. So having the right philosophy, which is basically the philosophy of Karl Popper, though not essential, is extremely helpful for avoiding blind alleys. Popper, I suppose, is most famous for his criterion of demarcation between science and metaphysics: scientific theories are those that are, in principle, testable by experiment, and what he called metaphysical theories (I think they would be better called philosophical theories) are the ones that can't be tested.

Being testable is not as simple a concept as it sounds. Popper investigated it in great detail and laid down principles that led me to the question: in what sense is constructor theory testable? Constructor theory consists of a language in which to express other scientific theories (a language can't be true or false; it can only be convenient or inconvenient), but also laws. And these laws are not about physical objects. They're laws about other laws. They say that other laws have to obey constructor-theoretic principles.

That raises the issue of how you can test a law about laws, because if it says that laws have to have such-and-such a property, you can't go around looking for a law that doesn't have that property; no experiment could ever tell you that such a law was true. Fortunately, this problem has been solved by Popper. You have to proceed indirectly in the case of these laws about laws. I want to introduce the terminology that laws about laws should be called principles. A lot of people already use that kind of terminology, but I'd like to make it standard.

For example, take the principle of the conservation of energy, which is the statement that all laws have to respect the conservation of energy. Perhaps it's not obvious, but there is no experiment that would show a violation of the conservation of energy, because if somebody presented you with an object that produced more energy than it took in, you could always say, "Ah, well, that's due to an invisible thing, or a law of motion that's different from what we think, or maybe the formula for energy is different for this object than we thought." So there's no experiment that could ever refute it, and, in fact, in the history of physics the discovery of the neutrino was made by exactly that method.

It appeared that the law of conservation of energy was not being obeyed in beta decay, and then Pauli suggested that maybe the energy was being carried off by an invisible particle that you couldn't detect. It turned out that he was right, but the way to test that is not by doing an experiment on beta decay but by seeing whether the theory, the law that says that the neutrino exists, is successful and independently testable. It's the testability of the law that the principle tells you about that, in effect, provides the testability of the principle.
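As a toy illustration of the style of inference Pauli used, one might write the energy bookkeeping like this. The numbers below are invented for illustration only; they are not measured beta-decay values.

```python
# Toy energy bookkeeping for beta decay, illustrating Pauli's inference.
# The numeric values are assumptions made up for this sketch, not data.

q_value = 1.16          # total energy released per decay, in MeV (assumed)
electron_energy = 0.30  # energy observed carried by the electron (assumed)

# Treating conservation of energy as an inviolable principle, the deficit
# must be carried off by something unobserved -- Pauli's neutrino:
neutrino_energy = q_value - electron_energy
```

The principle itself is never tested directly; what is tested is whether the law postulating the carrier of the missing energy is independently successful.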

One thing I think is important to stress about constructor theory is that when I say we want to reformulate physics in terms of what can and can't be done, that sounds like a retreat into operationalism or positivism or something: that we shouldn't worry about the constructor, that is, the thing that does the transformation, but only about the input and output and whether they are compatible. But actually, that is not how it works in constructor theory.

Constructor theory is all about how it comes about; it just expresses this in a different language. I'm not very familiar with the once-popular idea of cybernetics from a few decades ago, but I wouldn't be surprised if those ideas, which at the time proved not to lead anywhere, were actually an early avatar of constructor theory. If so, we'll only be able to see that with hindsight, because some of the ideas of constructor theory are really impossible to have until you have a conceptual framework that postdates the quantum theory of computation, i.e., one in which the theory of computation has been explicitly incorporated into physics, not just philosophically. That's what the quantum theory of computation did.

I'm not sure whether von Neumann used the term "constructor theory" or whether he just called it the universal constructor. Von Neumann's work in the 1940s is another place where constructor theory could be thought to have its antecedents. But von Neumann was interested in different issues. He was interested in how living things can possibly exist, given what the laws of physics are. This was before the DNA mechanism was discovered.

He was interested in issues of principle: how the existence of a self-replicating object was even consistent with the laws of physics as we know them. He realized that there was an underlying logic, an underlying algebra, in which one could express this and show what was needed. He actually solved the problem of how a living thing could possibly exist, basically by showing that it couldn't possibly work by literally copying itself.

It had to have within it a code, a recipe, a specification, or a computer program, as we would say today, specifying how to build it; therefore, the self-replication process had to take place in two stages. He did all this before the DNA system was known, but, from my perspective, he never got any further: he never arrived at constructor theory, or at the realization that this was all at the foundations of physics rather than just the foundations of biology. He was stuck in the prevailing conception of physics as being about initial conditions, laws of motion, and final states, in which, among other things, you have to include the constructor in your description of a system, which means that you don't see the laws about the transformation for what they are.
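The two-stage scheme just described can be sketched in a few lines of Python. This is only a hedged illustration, not von Neumann's actual automaton; every name here is hypothetical. The point is that the machine never copies itself directly: it blindly copies its recipe, then constructs new machinery by interpreting that recipe.

```python
# Toy model of von Neumann's two-stage replication scheme (illustrative
# only; all names are hypothetical, and the "machine" is just a dict).

RECIPE = ["build body", "build copier"]  # the stored specification

def copy_recipe(recipe):
    """Stage 1: duplicate the recipe verbatim, without interpreting it."""
    return list(recipe)

def construct_from(recipe):
    """Stage 2: interpret the recipe to build a new machine's parts."""
    return {"parts": [step for step in recipe], "recipe": None}

def replicate(machine):
    """Full replication: construct the child, then attach a recipe copy."""
    child = construct_from(machine["recipe"])
    child["recipe"] = copy_recipe(machine["recipe"])
    return child

parent = {"parts": list(RECIPE), "recipe": RECIPE}
child = replicate(parent)
grandchild = replicate(child)
# Replication breeds true: each generation is identical to the last.
```

Trying instead to copy the whole machine directly, including the part doing the copying, runs into the regress that motivated the two-stage design.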

When he found that he couldn't make a mathematical model of a living object by writing down its equations on a piece of paper, he resorted to simplifying the laws of physics again and again, and eventually invented the whole field that we now call cellular automata. It is a very interesting field, but it takes us away from real physics, because it abstracts away the laws of physics. What I want to do is go in the other direction: to integrate it with the laws of physics, not as they are now, but with laws of physics that have an underlying algebra that resembles, or is a generalization of, the theory of computation.
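A minimal example conveys the flavor of the field von Neumann invented. His own automaton used 29 cell states on a two-dimensional grid; the sketch below is the far simpler case of a two-state, one-dimensional automaton (Wolfram's rule 110, a standard example), showing how a trivial local update rule stands in for simplified "laws of physics."

```python
# Minimal one-dimensional cellular automaton (elementary rule 110).
# The 8-bit rule number encodes the update table: bit (4*l + 2*c + r)
# gives the next state of a cell with left/center/right values l, c, r.

RULE = 110

def step(cells, rule=RULE):
    """Apply one synchronous update to every cell; the row wraps at the edges."""
    n = len(cells)
    return [
        (rule >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

row = [0] * 15 + [1] + [0] * 15  # a single live cell in the middle
for _ in range(5):
    row = step(row)  # structure grows from a trivial local law
```

Everything physical has been abstracted away here, which is exactly the move the text describes: the automaton's "laws" are stipulated, not derived from real physics.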

Several strands led towards this, and I was lucky enough to be placed in more than one of them. The main one starts with Turing and continues with Rolf Landauer, who was a lone voice in the 1960s saying that computation is physics (because the theory of computation to this day is regarded by mathematicians as being about abstractions rather than about physics). Landauer realized that the concept of a purely abstract computer doesn't make sense, and that the theory of computation has to be a theory of what physical objects can do to information. Landauer focused on what restrictions the laws of physics impose on what kinds of computation can be done. Unfortunately, that was the wrong way around, because, as we later discovered, the most important and most striking thing about the relationship of physics with computation is that quantum theory (i.e., the deepest laws of physics that we know) permits new modes of computation that wouldn't be possible in classical physics. Once you have established the quantum theory of computation, you've got a theory of computation that is wholly within physics, and it's then natural to try to generalize that, which is what I wanted to do. So that's one of the directions.

Von Neumann was motivated really by theoretical biology rather than theoretical physics. Another thing that I think inhibited von Neumann from realizing that his theory was fundamental physics was that he had the wrong idea about quantum theory. He had settled for, and was one of the pioneers of, building a cop-out version of quantum theory that made it into just an operational theory, where you would use quantum theory merely to work out and predict the outcomes of experiments rather than to express the laws of how the outcome comes about. That was one of the reasons why von Neumann never thought of his own theory as being a generalization of quantum theory: he didn't really take quantum theory seriously. His contribution to quantum theory was to provide the von Neumann rules, which allow you to use the theory in practice without ever wondering what it means.

I came from a different tradition of thinking, via Hugh Everett and Karl Popper in their different ways. Both of them insisted that scientific theories are about what is really there and why observations come about, not just predicting what the observations are. Therefore, I couldn't be satisfied with just an operational version of quantum mechanics.

I had to embrace the Everett, or many-universes, interpretation of quantum mechanics. From the present point of view, the key thing about it is that it is a realistic theory, as the philosophers say. That is, it's a theory that purports to describe what really happens rather than just our experiences of what happens. Once you think of quantum theory that way, it's only a very small step to realizing, first of all, that the theory of computation is really the quantum theory of computation, which was my earlier work, and then that the quantum theory of computation is not sufficient to provide the foundation for the whole of physics. So what's the rest? Well, the rest is constructor theory.

What's needed in constructor theory is to express it in terms that can be integrated with the rest of physics (formulae, equations), because only then can it make contact with other scientific theories. The principles of constructor theory then constrain the laws of other theories, which I now call subsidiary theories. Constructor theory is the deepest theory, and everything else is subsidiary to it. It constrains them, and that, in turn, leads to contact with experiment.

The key thing to do, apart from guessing what the actual laws are, is to find a way of expressing them. The first item on the agenda is to set up a constructor-theoretic algebra: an algebra in which you can do two things. One is to express any other scientific theory in terms of which transformations can or cannot be performed. (The analog in the prevailing formulation of physics would be something like differential equations, but in constructor theory it will be an algebra.) The other is to use that algebra to express the laws of constructor theory themselves, which won't be expressed in terms of subsidiary theories; they will just make assertions about subsidiary theories.
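The actual algebra is unpublished and still under development, so the following is only an illustrative sketch of the kind of object such an algebra might manipulate: a task as a set of input-to-output attribute pairs, a serial composition operation, and a possible/impossible flag. Every name, rule, and example here is hypothetical.

```python
# Purely illustrative toy "task algebra"; none of this is constructor
# theory's real formalism, which has not been published.

class Task:
    """A task: a finite set of input-attribute -> output-attribute pairs,
    plus a flag saying whether some constructor could perform it."""

    def __init__(self, pairs, possible):
        self.pairs = dict(pairs)
        self.possible = possible

    def then(self, other):
        """Serial composition: perform self, then other, chaining attributes."""
        chained = {a: other.pairs[b]
                   for a, b in self.pairs.items() if b in other.pairs}
        # One candidate principle: a composite task is impossible whenever
        # either factor is impossible. (Stated here only to show the style
        # of reasoning, not as an actual law of the theory.)
        return Task(chained, self.possible and other.possible)

# A subsidiary theory's laws would then be statements of possibility:
heat = Task({"cold": "hot"}, possible=True)
unmix = Task({"hot": "sorted"}, possible=False)  # forbidden, say, by thermodynamics
composite = heat.then(unmix)
```

The point of the sketch is the shift of primitive: instead of states evolving under equations of motion, the basic objects are transformations tagged as possible or impossible, and laws become constraints on those tags.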

Chiara Marletto (a student I'm working with) and I are working on that algebra. It's a conceptual jolt to think in terms of it rather than in the terms that have been traditional in physics for the last few decades. We try to think through what it means, find contradictions between different strands of thought about what it means, realize that the algebra and the expressions we write in it don't quite make sense, change the algebra, see what that means, and so on. It's a process of doing math, doing algebra, by working things out, interleaved with trying to understand what those things mean. This rather mirrors how the pioneers of quantum theory developed their theory, too.

It was the same thing there and, in fact, one of the formulations of quantum theory, namely matrix mechanics as invented by Heisenberg and others, isn't based on the differential-equation paradigm but is more algebraic; it, too, can be seen as a precursor of constructor theory.

We haven't yet succeeded in making a viable algebra, but even with the rudimentary form of it that we have now, we have got some remarkable results (this was mainly done by Chiara), which have almost magically provided a deeper foundation for information theory than was possible within physics before.

Like all fundamental theories, it's difficult to predict what effect it will have, precisely because it is going to change things at a fundamental level. But there is one big thing that I'm pretty sure the constructor-theoretic way of looking at physics has to offer our worldview in terms of everyday life, and that is optimism. Optimism in my terminology doesn't mean expecting that things will turn out well all the time. It's a very specific thing that I think captures the historical, philosophical trend of what optimism has meant, if you remove the nonsense: namely, the optimistic view is not that problems will not occur, but that all problems and all evils are caused by lack of knowledge; the converse is that all evils are soluble given the right knowledge. And knowledge can be found by the methods of conjecture, criticism, and so on that we know to be the way of creating knowledge.

Although this sounds like a statement at a very human level, because it's about knowledge and evils and being able to do things and so on, it is directly linked with constructor theory at the most fundamental level because of the fundamental dichotomy in constructor theory, which claims that the whole of science is to be formulated in terms of the difference between transformations that are possible and those that are impossible, and there isn't a third possibility. That's the thing. The whole thing doesn't work if there's a third possibility.

If a task, a transformation, is impossible, then constructor theory says it must be because there is some law of physics that makes it impossible. Conversely, if there isn't a law of physics that makes it impossible, then it's possible. There is no third possibility. What does possible mean? In the overwhelming majority of cases (though some things are possible because they happen spontaneously), things that are possible are possible because the right knowledge, embodied in the right physical object, would make them happen. Since the dichotomy is between that which is forbidden by the laws of physics and that which is possible with the right knowledge, and there isn't any other possibility, this tells us that all evils are due to lack of knowledge.

This is counterintuitive. It's contrary to conventional wisdom, and it's contrary to our intuitive or at least culturally intuitive way of looking at the world. I find myself grasping for a third possibility. Isn't there something that we can't do even though there's no actual law of physics that says we won't be able to do it? Well, no, there can't be. This is built into constructor theory. There's no way of getting around it, and I think once you've seen that it's at the foundations of physics, it becomes more and more natural. It becomes more and more sort of obvious in the sense of it's weird, but what else could it be?

It's rather like the intuitive shift that comes from realizing that people in Australia really are upside-down compared with us, and they really are down there through the earth. One can know this intellectually, but to actually think in those terms takes an effort. It's something that we all learn at some point, accept intellectually at some point if we're rational, but then to incorporate that into our world-view changes us. It changes us for instance because whole swaths of supernatural thinking are made impossible by truly realizing that the people in Australia are upside-down, and similarly whole swaths of irrational thinking are made impossible by realizing that, in the sense I've just described, there is no third possibility between being able to do it if we have the right knowledge and its being forbidden by the laws of physics.

The stereotype of how new ideas get into fundamental science is: First somebody has the idea. Everyone thinks they're crazy, and eventually they're vindicated. I don't think it happens like that very often. There are cases where it does, but I think that much more often, and this is my own experience when I've had new ideas, is that it's not that people say, "You're crazy; that can't be true." They say, "Yes, that's nice. Well done," and then they go off and ignore it, and then eventually people say, "Oh, well, maybe it leads to this, and maybe it leads to that, and maybe it's worth working on. Maybe it's fruitful," and then eventually they work more on it.

This has happened in several of the things that I've done, and this is what I would expect to happen with constructor theory. I haven't had anyone tell me that this is crazy and it can't be right, but I'm certainly in the stage of most people or most physicists saying, "Well, that's nice. That's interesting. Well done," and then they go away and ignore it.

No one else is actually working on it at the moment. Several of our colleagues have expressed something between curiosity and substantial interest which may well go up as soon as we have results. At the moment, there's no publication. I've submitted a philosophical paper which hasn't even been published yet. When that comes out, it'll get a wider readership. People will understand what it's about, but while in philosophy you can write a paper that just has hopes in it or interpretations of things, in physics you need results. When we have our first few papers that have results, I think that an exponential process of people working on it will begin—if it's true. Of course, it might be that some of these results are of the form "it can't be true", in which case it will end up as an interesting footnote to the history of physics.

I had to write the philosophical paper first because there's quite a lot of philosophical foundation to constructor theory, and to put that into a physics paper would have made it too long and, to physicists, too boring. So I had to write something that we can refer to. It's the philosophical paper first, and then the next thing was going to be the constructor-theoretic algebra, which is the language and formalism, showing how both old laws and new constructor-theoretic laws can be expressed. But now it's likely that the first physics paper on constructor theory will be about constructor-theoretic information theory, because it has yielded unexpectedly good results there.

We're talking about the foundations of physics here, so the question is whether the theory is consistent, whether it's fruitful, whether it leads to new discoveries. These foundational theories are of interest to people who like foundational theories, but their usefulness comes in their fruitfulness later.

Quantum theory is, again, a very good example. Almost nobody was actually interested in quantum theory except the few people who worked on its foundations. But now, several decades after its discovery, everybody who works on microchips, or on information or cryptography and so on, has to use quantum mechanics, and everybody who wants to understand their position in the universe has to take a view of what quantum theory tells us about what we are.

For example, you have to take a view about whether it's really true that we exist in vast numbers of parallel copies, some of them slightly different, some of them the same, as I think quantum mechanics inevitably leads to, or not. There's no rational way of not taking a position on that issue. Apart from the issue of optimism, which is an unexpectedly direct connection to the everyday level, we can't tell at the moment what constructor theory will tell us about ourselves, our position in the universe, and what every reasonable person should know, until we work out what the theory says, which we can't do until we work it out properly within the context of theoretical physics.

I'm interested in basically anything that's fundamental. That's not confined to fundamental physics, but for me that's what it all revolves around. In the case of constructor theory, how this is going to develop depends entirely on what the theory turns out to say and, even more fundamentally, on whether it turns out to be true. If it turns out that one cannot build a foundation for physics in the constructor-theoretic way, that will be extremely interesting, because it will mean that whole lines of argument that seemed to make a constructor theory inevitable are actually wrong, and that whole lines of unification that seem to connect different fields don't connect them, and therefore the fields must be connected in some other way, because the truth of the world has to be connected.

If it turns out to be wrong, the chances are it will be found to be wrong long before it's falsified experimentally. This, again, is the typical way with scientific theories. What gets the headlines is when you predict a certain particle, do the experiment, and it doesn't appear, and then you're proved wrong; but actually the overwhelming majority of scientific theories are proved wrong long before they ever get tested. They're proved wrong by being internally inconsistent, or by being inconsistent with other theories that we believe to be true, or, most often, by not doing the job of explanation that they were designed to do. So if you have a theory that is supposed, for example, to explain the second law of thermodynamics, and why there is irreversibility when the fundamental laws of physics are reversible, and then you find by analyzing the theory that it doesn't actually do that, then you don't have to bother to test it, because it doesn't address the problem that it was designed to address. If constructor theory turns out to be false, I think it's overwhelmingly likely that it will be by that method: it just won't do the unification job or foundational job that it was designed to do.

Then we would have to learn the lesson of how it turned out to be wrong. Turning out to be wrong is not a disgrace. It's not like in politics where if you lose the election then you've lost. In science, if your idea that looked right turns out to be wrong, you've learned something.

One of the central philosophical motivations for why I do fundamental physics is that I'm interested in what the world is like; that is, not just the world of our observations, what we see, but the invisible world, the invisible processes and objects that bring about the visible. Because the visible is only the tiny, superficial and parochial sheen on top of the real reality, and the amazing thing about the world and our place in it is that we can discover the real reality.

We can discover what is at the center of stars even though we've never been there. We can find out that those cold, tiny objects in the sky that we call stars are actually million-kilometer, white, hot, gaseous spheres. They don't look like that. They look like cold dots, but we know different. We know that the invisible reality is there giving rise to our visible perceptions.

The view that science has to be about that invisible reality has been, for many decades, a minority and unpopular view among philosophers and, to a great extent, regrettably, even among scientists. They have taken the view that science, just because it is characterized by experimental tests, has to be only about experimental tests, but that's a trap. If that were so, it would mean that science is only about humans, and not even everything about humans but about human experience only. That's solipsism. It purports to be a rigorous, objective worldview in which only observations count, but it ends up, by its own inexorable logic, saying that only human experience is real, which is solipsism.

I think it's important to regard science not as an enterprise for the purpose of making predictions, but as an enterprise for the purpose of discovering what the world is really like, what is really there, how it behaves and why; and that is tested by observation. It is absolutely amazing that the tiny, parochial, weak, and error-prone access that we have to observations is capable of testing theories and knowledge of the whole of reality, which has tremendous reach far beyond our experience. And yet we know about it. That's the amazing thing about science. That's the aspect of science that I want to pursue.
Edge | A Talk With David Deutsch | Constructor Theory | Oct 2012