How Will Science Help Us In The Future Essay


(KEVIN KELLY:) Science will continue to surprise us with what it discovers and creates; then it will astound us by devising new methods to surprise us. At the core of science's self-modification is technology. New tools enable new structures of knowledge and new ways of discovery. The achievement of science is to know new things; the evolution of science is to know them in new ways. What evolves is less the body of what we know and more the nature of our knowing.

Technology is, in its essence, new ways of thinking. The most powerful type of technology, sometimes called enabling technology, is a thought incarnate which enables new knowledge to find and develop new ways to know. This kind of recursive bootstrapping is how science evolves. As with every type of knowledge, it accrues layers of self-reference to its former state.

New informational organizations are layered upon the old without displacement, just as in biological evolution. Our brains are good examples. We retain reptilian reflexes deep in our minds (fight or flight) while the more complex structuring of knowledge (how to do statistics) is layered over those primitive networks. In the same way, older methods of knowing (older scientific methods) are not jettisoned; they are simply subsumed by new levels of order and complexity. But the new tools of observation and measurement, and the new technologies of knowing, will alter the character of science, even while it retains the old methods.

I'm willing to bet the scientific method 400 years from now will differ from today's understanding of science more than today's scientific method differs from the proto-science used 400 years ago. A sensible forecast of technological innovations in the next 400 years is beyond our imaginations (or at least mine), but we can fruitfully envision technological changes that might occur in the next 50 years.

Based on the suggestions of the observers above, and my own active imagination, I offer the following as possible near-term advances in the evolution of the scientific method.

Compiled Negative Results — Negative results are saved, shared, compiled and analyzed, instead of being dumped. Positive results may increase their credibility when linked to negative results. We already have hints of this in the recent decision of biochemical journals to require investigators to register early phase 1 clinical trials. Usually phase 1 trials of a drug end in failure and their negative results are not reported. As a public health measure, these negative results should be shared. Major journals have pledged not to publish the findings of phase 3 trials if their earlier phase 1 results had not been reported, whether those early results were negative or positive.

Triple Blind Experiments – In a double blind experiment neither researcher nor subject are aware of the controls, but both are aware of the experiment. In a triple blind experiment all participants are blind to the controls and to the very fact of the experiment itself. This way of doing science depends on cheap, non-invasive sensors running continuously for years, generating immense streams of data. While ordinary life continues for the subjects, massive amounts of constant data about their lifestyles are drawn and archived. Out of this huge database, specific controls, measurements and variables can be "isolated" afterwards. For instance, the vital signs and lifestyle metrics of a hundred thousand people might be recorded in dozens of different ways for 20 years, and then later analysis could find certain variables (smoking habits, heart conditions) and certain ways of measuring that would permit the entire 20 years to be viewed as an experiment – one that no one knew was even going on at the time. This post-hoc analysis depends on the pattern-recognition abilities of supercomputers. It removes one more variable (knowledge of the experiment) and permits greater freedom in devising experiments from the indiscriminate data.
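
A toy sketch of how such a post-hoc "experiment" might be carved out of an archive of passively collected data. The dataset, fields, and effect size here are all invented for illustration:

```python
import random

random.seed(0)

# Hypothetical archive of passively logged lifestyle and vital-sign records.
# The smoking effect on resting heart rate below is an invented toy effect.
def simulate_archive(n=1000):
    records = []
    for i in range(n):
        smoker = random.random() < 0.3
        heart_rate = random.gauss(72 + (6 if smoker else 0), 5)
        records.append({"id": i, "smoker": smoker, "heart_rate": heart_rate})
    return records

def post_hoc_experiment(records, variable, outcome):
    """Carve treatment and control groups out of archived data after the
    fact -- no subject ever knew an experiment was running."""
    treated = [r[outcome] for r in records if r[variable]]
    control = [r[outcome] for r in records if not r[variable]]
    mean = lambda xs: sum(xs) / len(xs)
    return mean(treated) - mean(control)

archive = simulate_archive()
effect = post_hoc_experiment(archive, "smoker", "heart_rate")
print(f"estimated effect: {effect:.1f} bpm")
```

A real version would run over decades of sensor streams, and would need statistical machinery to guard against the many spurious correlations such retrospective slicing invites.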

Combinatorial Sweep Exploration – Much of the unknown can be explored by systematically creating random varieties of it at a large scale. You can explore the composition of ceramics (or thin films, or rare-earth conductors) by creating all possible types of ceramic (or thin films, or rare-earth conductors), and then testing them in their millions. You can explore certain realms of proteins by generating all possible variations of that type of protein and then seeing if they bind to a desired disease-specific site. You can discover new algorithms by automatically generating all possible programs and then running them against the desired problem. Indeed all possible Xs of almost any sort can be summoned and examined as a way to study X. None of this combinatorial exploration was even thinkable before robotics and computers; now both of these technologies permit this brute force style of science. The parameters of the emergent "library" of possibilities yielded by the sweep become the experiment. With sufficient computational power, together with a pool of proper primitive parts, vast territories unknown to science can be probed in this manner.
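
A minimal sketch of a combinatorial sweep, assuming a toy three-component "material" and an invented figure-of-merit function standing in for a physical assay:

```python
from itertools import product

# Toy "materials" sweep: enumerate every composition of three hypothetical
# components in 10% steps, then score each with a stand-in test function.
def score(a, b, c):
    # Invented figure of merit; a real sweep would run a physical assay here.
    return -(a - 0.2) ** 2 - (b - 0.5) ** 2 - (c - 0.3) ** 2

steps = [i / 10 for i in range(11)]
candidates = [(a, b, c) for a, b, c in product(steps, repeat=3)
              if abs(a + b + c - 1.0) < 1e-9]   # fractions must sum to 100%

best = max(candidates, key=lambda t: score(*t))
print(best)
```

The entire grid of 66 admissible compositions is generated and tested; in a real sweep the library would number in the millions and the scoring would be done by robotic instruments.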

Evolutionary Search – A combinatorial exploration can be taken even further. If new libraries of variations can be derived from the best of a previous generation of good results, it is possible to evolve solutions. The best results are mutated and bred toward better results. The best testing protein is mutated randomly in thousands of ways, and the best of that bunch kept and mutated further, until a lineage of proteins, each one more suited to the task than its ancestors, finally leads to one that works perfectly. This method can be applied to computer programs and even to the generation of better hypotheses.
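
The evolutionary loop can be sketched in a few lines. Here a string stands in for a protein, and an invented match-the-target score stands in for binding affinity:

```python
import random

random.seed(1)

TARGET = "BINDING_SITE"           # stand-in for a disease-specific site
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ_"

def fitness(candidate):
    # Invented stand-in for binding affinity: characters matching the target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(parent, rate=0.1):
    return "".join(random.choice(ALPHABET) if random.random() < rate else ch
                   for ch in parent)

# Start from a random "protein"; keep the best of each brood (elitism), so
# fitness never decreases, and breed it further until the target is matched.
best = "".join(random.choice(ALPHABET) for _ in TARGET)
for generation in range(500):
    brood = [mutate(best) for _ in range(100)] + [best]
    best = max(brood, key=fitness)
    if best == TARGET:
        break

print(generation, best)
```

The elitism step (always carrying the parent into the next brood) is what makes the lineage monotonically improve, mirroring the "best of that bunch kept and mutated further" loop described above.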

Multiple Hypothesis Matrix – Instead of proposing a series of single hypotheses, in which each hypothesis is falsified and discarded until one theory finally passes and is verified, a matrix of many hypothesis scenarios is proposed and managed simultaneously. An experiment travels through the matrix of multiple hypotheses, some of which are partially right and partially wrong. Veracity is statistical; more than one thesis is permitted to stand with partial results. Just as data were assigned a margin of error, so too will hypotheses. An explanation may be stated as: 20% is explained by this theory, 35% by this theory, and 65% by this theory. A matrix also permits experiments with more variables and more complexity than before.
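
A toy version of such hypothesis weighting: several part-right models are scored against the same data simultaneously, rather than kept or discarded one at a time. The data-generating rule and the candidate models are invented:

```python
# Three part-right "hypotheses" scored simultaneously against the same data.
data = [(x, 3.0 * x + 0.5 * x * x) for x in range(1, 11)]

hypotheses = {
    "linear":    lambda x: 3.0 * x,        # captures the linear part only
    "quadratic": lambda x: 0.5 * x * x,    # captures the quadratic part only
    "constant":  lambda x: 20.0,           # a crude baseline
}

def explained(model):
    # Crude credit score: 1 minus normalized squared error, floored at zero.
    sse = sum((y - model(x)) ** 2 for x, y in data)
    total = sum(y ** 2 for _, y in data)
    return max(0.0, 1.0 - sse / total)

weights = {name: round(explained(m), 2) for name, m in hypotheses.items()}
print(weights)
```

Each partially-right model keeps a nonzero weight, and the weights can overlap rather than summing to 100% (as in the essay's example), since different theories can explain overlapping parts of the data.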

Pattern Augmentation – Pattern-seeking software will recognize patterns in noisy results. In large bodies of information with many variables, algorithmic discovery of patterns will become necessary and common. Such tools already exist in specialized niches of knowledge (such as particle smashing), but more general rules and general-purpose pattern engines will enable pattern-seeking tools to become part of all data treatment.
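
A small illustration of such a pattern engine: brute-force autocorrelation recovering a hidden period from a noisy stream. The signal, period, and noise level are all invented:

```python
import math
import random

random.seed(3)

# A noisy stream with a hidden 25-sample period; everything here is invented.
N, PERIOD = 1000, 25
signal = [math.sin(2 * math.pi * t / PERIOD) + random.gauss(0, 0.5)
          for t in range(N)]

def autocorr(xs, lag):
    # Average product of the stream with a lagged copy of itself.
    n = len(xs) - lag
    return sum(xs[t] * xs[t + lag] for t in range(n)) / n

# Brute-force search over candidate lags recovers the buried period.
best_lag = max(range(2, 40), key=lambda lag: autocorr(signal, lag))
print(best_lag)
```

Real pattern engines use far more sophisticated methods, but the shape is the same: score every candidate structure against the noise and let the strongest signal surface.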

Adaptive Real Time Experiments – Results are evaluated, and large-scale experiments modified, in real time. What we have now is primarily batch-mode science. Traditionally, the experiment starts, the results are collected, and then conclusions are reached. After a pause, the next experiment is designed in response, and then launched. In adaptive experiments, the analysis happens in parallel with collection, and the intent and design of the test is shifted on the fly. Some medical tests are already stopped or re-evaluated on the basis of early findings; this approach would extend such adaptation to other realms. Proper methods would be needed to keep the adaptive experiment objective.
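
A sketch of an adaptive trial with interim analyses, assuming two arms with invented response rates and an invented stopping rule:

```python
import random

random.seed(7)

# Two treatment arms with hidden response rates; interim analyses every 50
# subjects may stop the trial early. Rates and stopping rule are invented.
TRUE_RATE = {"A": 0.45, "B": 0.70}

results = {"A": [], "B": []}
stopped_at = None
for n in range(1, 1001):
    arm = "A" if n % 2 else "B"                   # simple alternation
    results[arm].append(random.random() < TRUE_RATE[arm])
    if n % 50 == 0:                               # interim look, mid-experiment
        rate = {a: sum(r) / len(r) for a, r in results.items()}
        if abs(rate["A"] - rate["B"]) > 0.2:      # one arm is clearly better
            stopped_at = n
            break

print(stopped_at)
```

The analysis runs alongside collection instead of after it, which is the essence of the shift from batch mode. The objectivity problem the essay flags is real: repeated interim looks inflate false-positive rates unless the stopping thresholds are pre-specified and corrected.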

AI Proofs – Artificial intelligence will derive and check the logic of an experiment. Ever more sophisticated and complicated science experiments become ever more difficult to judge. Artificial expert systems will at first evaluate the scientific logic of a paper to ensure the architecture of the argument is valid. They will also check that the paper includes the required types of data. This "proof review" will augment the peer review of editors and reviewers. Over time, as the protocols for an AI check become standard, AI can score papers and proposals for experiments for certain consistencies and structure. This metric can then be used to categorize experiments, to suggest improvements and further research, and to facilitate comparisons and meta-analyses. A better way to inspect, measure and grade the structure of experiments would also help develop better kinds of experiments.
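
A deliberately simple stand-in for such a structural check: a rule-based score of whether a paper contains the parts an argument needs. The required sections and the example paper are invented:

```python
# Rule-based stand-in for an automated "proof review": does the paper contain
# the structural parts an argument needs? The required sections are invented.
REQUIRED_SECTIONS = ["hypothesis", "methods", "data", "results", "conclusion"]

def structure_score(paper):
    """Return a 0..1 completeness score and the list of missing sections."""
    present = [s for s in REQUIRED_SECTIONS if s in paper]
    missing = [s for s in REQUIRED_SECTIONS if s not in paper]
    return len(present) / len(REQUIRED_SECTIONS), missing

paper = {
    "hypothesis": "Compound X binds site Y.",
    "methods": "Combinatorial sweep of one million variants.",
    "results": "Three variants bound with high affinity.",
}
score, missing = structure_score(paper)
print(score, missing)
```

A real "proof review" would evaluate the logical dependencies between sections, not just their presence, but even this checklist level of automation would make papers comparable and machine-indexable.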

Wiki-Science – The average number of authors per paper continues to rise. With massive collaborations, the numbers will boom. Experiments involving thousands of investigators collaborating on a "paper" will become commonplace. The paper is ongoing, and never finished. It becomes a trail of edits and experiments posted in real time — an ever evolving "document." Contributions are not assigned. Tools for tracking credit and contributions will be vital. Responsibilities for errors will be hard to pin down. Wiki-science will often be the first word on a new area. Some researchers will specialize in refining ideas first proposed by wiki-science.

Defined Benefit Funding — Ordinarily science is funded by the experiment (results not guaranteed) or by the investigator (nothing guaranteed). The use of prize money for particular scientific achievements will play a greater role. A goal is defined, funding is secured for the first to reach it, and the contest is opened to all. Imagine a Turing Test prize awarded to the first computer to pass the Turing Test as a passable intelligence. Defined Benefit Funding can also be combined with prediction markets, which set up a marketplace of bets on possible innovations. The bet winnings can encourage funding of specific technologies.

Zillionics – Ubiquitous always-on sensors in bodies and environment will transform medical, environmental, and space sciences. Unrelenting rivers of sensory data will flow day and night from zillions of sources. The exploding number of new, cheap, wireless, and novel sensing tools will require new types of programs to distill, index and archive this ocean of data, as well as to find meaningful signals in it. The field of "zillionics" — dealing with zillions of data flows — will be essential in health, natural sciences, and astronomy. This trend will require further innovations in statistics, math, visualizations, and computer science. More is different. Zillionics requires a new scientific perspective in terms of permissible errors, numbers of unknowns, probable causes, repeatability, and significant signals.
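
One concrete requirement of zillionics is distilling a stream without storing it. Welford's algorithm, for instance, maintains a running mean and variance in constant memory; the "sensor" below is a deterministic stand-in:

```python
# Welford's algorithm: running mean and variance in constant memory, so a
# zillionic stream can be distilled without ever being stored whole.
class RunningStats:
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def push(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def variance(self):
        return self.m2 / (self.n - 1) if self.n > 1 else 0.0

stats = RunningStats()
for t in range(100_000):                 # deterministic stand-in for a sensor
    stats.push((t % 100) / 100)

print(round(stats.mean, 3), round(stats.variance, 4))
```

Whatever the eventual tools of zillionics look like, they will be built from one-pass, bounded-memory primitives like this rather than from archives of raw readings.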

Deep Simulations – As our knowledge of complex systems advances, we can construct more complex simulations of them. Both the successes and failures of these simulations will help us to acquire more knowledge of the systems. Developing a robust simulation will become a fundamental part of science in every field. Indeed the science of making viable simulations will become its own specialty, with a set of best practices, and an emerging theory of simulations. And just as we now expect a hypothesis to be subjected to the discipline of being stated in mathematical equations, in the future we will expect all hypotheses to be exercised in a simulation. There will also be the craft of taking things known only in simulation and testing them in other simulations—sort of a simulation of a simulation.
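
A hypothesis exercised in simulation might look like this in miniature: a crude SIR epidemic model (daily Euler steps, invented parameters) used to ask whether halving the contact rate flattens the peak:

```python
# Crude SIR epidemic model (daily Euler steps); all parameters are invented.
def sir_peak(beta, gamma=0.1, days=300, s0=0.99, i0=0.01):
    """Return the peak infected fraction for a given contact rate beta."""
    s, i, peak = s0, i0, i0
    for _ in range(days):
        new_inf = beta * s * i       # susceptibles newly infected today
        new_rec = gamma * i          # infected who recover today
        s -= new_inf
        i += new_inf - new_rec
        peak = max(peak, i)
    return peak

baseline = sir_peak(beta=0.4)        # hypothesis: halving contacts...
distanced = sir_peak(beta=0.2)       # ...should flatten the peak
print(round(baseline, 2), round(distanced, 2))
```

Even a toy like this shows the discipline the essay describes: the hypothesis must be stated precisely enough to run, and its failure modes (here, the crude daily time step) become objects of study in themselves.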

Hyper-analysis Mapping – Just as meta-analysis gathered diverse experiments on one subject and integrated their (sometimes contradictory) results into a large meta-view, hyper-analysis creates an extremely large-scale view by pulling together meta-analyses. The cross-links of references, assumptions, evidence and results are unraveled by computation, and then reviewed at a larger scale which may include data and studies adjacent but not core to the subject. Hyper-mapping tallies not only what is known in a particular wide field, but also emphasizes unknowns and contradictions based on what is known outside that field. It is used to integrate a meta-analysis with other meta-results, and to spotlight "white spaces" where additional research would be most productive.
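
One mechanical piece of hyper-analysis is pooling the meta-analyses themselves. Below is a standard inverse-variance weighting step; the fields, effect sizes, and variances are invented:

```python
# Pool several meta-analyses of the same effect by inverse-variance weighting.
# The fields, effect sizes, and variances below are invented.
meta_analyses = [
    {"field": "cardiology",  "effect": 0.30, "variance": 0.010},
    {"field": "nutrition",   "effect": 0.10, "variance": 0.040},
    {"field": "gerontology", "effect": 0.25, "variance": 0.020},
]

weights = [1.0 / m["variance"] for m in meta_analyses]
pooled = sum(w * m["effect"] for w, m in zip(weights, meta_analyses)) / sum(weights)
pooled_var = 1.0 / sum(weights)   # pooled estimate is tighter than any input

print(round(pooled, 3), round(pooled_var, 4))
```

Precise studies pull the pooled estimate toward themselves, and the pooled variance is smaller than any single input's, which is what makes stacking meta-analyses into a hyper-view worthwhile.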

Return of the Subjective – Science came into its own when it managed to refuse the subjective and embrace the objective. The repeatability of an experiment by another, perhaps less enthusiastic, observer was instrumental in keeping science rational. But as science plunges into the outer limits of scale – at the largest and smallest ends – and confronts the weirdness of the fundamental principles of matter/energy/information such as that inherent in quantum effects, it may not be able to ignore the role of the observer. Existence seems to be a paradox of self-causality, and any science exploring the origins of existence will eventually have to embrace the subjective, without becoming irrational. The tools for managing paradox are still undeveloped.

We live in a golden age of technological, medical, scientific and social progress. Look at our computers! Look at our phones! Twenty years ago, the internet was a creaky machine for geeks. Now we can’t imagine life without it. We are on the verge of medical breakthroughs that would have seemed like magic only half a century ago: cloned organs, stem-cell therapies to repair our very DNA. Even now, life expectancy in some rich countries is improving by five hours a day. A day! Surely immortality, or something very like it, is just around the corner.

The notion that our 21st-century world is one of accelerating advances is so dominant that it seems churlish to challenge it. Almost every week we read about ‘new hopes’ for cancer sufferers, developments in the lab that might lead to new cures, talk of a new era of space tourism and super-jets that can fly round the world in a few hours. Yet a moment’s thought tells us that this vision of unparalleled innovation can’t be right, that many of these breathless reports of progress are in fact mere hype, speculation – even fantasy.

Yet there once was an age when speculation matched reality. It spluttered to a halt more than 40 years ago. Most of what has happened since has been merely incremental improvements upon what came before. That true age of innovation – I’ll call it the Golden Quarter – ran from approximately 1945 to 1971. Just about everything that defines the modern world either came about, or had its seeds sown, during this time. The Pill. Electronics. Computers and the birth of the internet. Nuclear power. Television. Antibiotics. Space travel. Civil rights.

There is more. Feminism. Teenagers. The Green Revolution in agriculture. Decolonisation. Popular music. Mass aviation. The birth of the gay rights movement. Cheap, reliable and safe automobiles. High-speed trains. We put a man on the Moon, sent a probe to Mars, beat smallpox and discovered the double-spiral key of life. The Golden Quarter was a unique period of less than a single human generation, a time when innovation appeared to be running on a mix of dragster fuel and dilithium crystals.

Today, progress is defined almost entirely by consumer-driven, often banal improvements in information technology. The US economist Tyler Cowen, in his essay The Great Stagnation (2011), argues that, in the US at least, a technological plateau has been reached. Sure, our phones are great, but that’s not the same as being able to fly across the Atlantic in eight hours or eliminating smallpox. As the US technologist Peter Thiel once put it: ‘We wanted flying cars, we got 140 characters.’

Economists describe this extraordinary period in terms of increases in wealth. After the Second World War came a quarter-century boom; GDP-per-head in the US and Europe rocketed. New industrial powerhouses arose from the ashes of Japan. Germany experienced its Wirtschaftswunder. Even the Communist world got richer. This growth has been attributed to massive postwar government stimulus plus a happy nexus of low fuel prices, population growth and high Cold War military spending.

But alongside this was that extraordinary burst of human ingenuity and societal change. This is commented upon less often, perhaps because it is so obvious, or maybe it is seen as a simple consequence of the economics. We saw the biggest advances in science and technology: if you were a biologist, physicist or materials scientist, there was no better time to be working. But we also saw a shift in social attitudes every bit as profound. In even the most enlightened societies before 1945, attitudes to race, sexuality and women’s rights were what we would now consider antediluvian. By 1971, those old prejudices were on the back foot. Simply put, the world had changed.


But surely progress today is real? Well, take a look around. Look up and the airliners you see are basically updated versions of the ones flying in the 1960s – slightly quieter Tristars with better avionics. In 1971, a regular airliner took eight hours to fly from London to New York; it still does. And in 1971, there was one airliner that could do the trip in three hours. Now, Concorde is dead. Our cars are faster, safer and use less fuel than they did in 1971, but there has been no paradigm shift.

And yes, we are living longer, but this has disappointingly little to do with any recent breakthroughs. Since 1970, the US Federal Government has spent more than $100 billion in what President Richard Nixon dubbed the ‘War on Cancer’. Far more has been spent globally, with most wealthy nations boasting well-funded cancer‑research bodies. Despite these billions in investment, this war has been a spectacular failure. In the US, the death rates for all kinds of cancer dropped by only 5 per cent in the period 1950-2005, according to the National Center for Health Statistics. Even if you strip out confounding variables such as age (more people are living long enough to get cancer) and better diagnosis, the blunt fact is that, with most kinds of cancer, your chances in 2014 are not much better than they were in 1974. In many cases, your treatment will be pretty much the same.


For the past 20 years, as a science writer, I have covered such extraordinary medical advances as gene therapy, cloned replacement organs, stem-cell therapy, life-extension technologies, the promised spin-offs from genomics and tailored medicine. None of these new treatments is yet routinely available. The paralyzed still cannot walk, the blind still cannot see. The human genome was decoded (one post-Golden Quarter triumph) nearly 15 years ago and we’re still waiting to see the benefits that, at the time, were confidently asserted to be ‘a decade away’. We still have no real idea how to treat chronic addiction or dementia. The recent history of psychiatric medicine is, according to one eminent British psychiatrist I spoke to, ‘the history of ever-better placebos’. And most recent advances in longevity have come about by the simple expedient of getting people to give up smoking, eat better, and take drugs to control blood pressure.

There has been no new Green Revolution. We still drive steel cars powered by burning petroleum spirit or, worse, diesel. There has been no new materials revolution since the Golden Quarter’s advances in plastics, semi-conductors, new alloys and composite materials. After the dizzying breakthroughs of the early- to mid-20th century, physics seems (Higgs boson aside) to have ground to a halt. String Theory is apparently our best hope of reconciling Albert Einstein with the Quantum world, but as yet, no one has any idea if it is even testable. And nobody has been to the Moon for 42 years.

Why has progress stopped? Why, for that matter, did it start when it did, in the dying embers of the Second World War?

One explanation is that the Golden Age was the simple result of economic growth and technological spinoffs from the Second World War. It is certainly true that the war sped the development of several weaponisable technologies and medical advances. The Apollo space programme probably could not have happened when it did without the aerospace engineer Wernher Von Braun and the V-2 ballistic missile. But penicillin, the jet engine and even the nuclear bomb were on the drawing board before the first shots were fired. They would have happened anyway.

Conflict spurs innovation, and the Cold War played its part – we would never have got to the Moon without it. But someone has to pay for everything. The economic boom came to an end in the 1970s with the collapse of the 1944 Bretton Woods trading agreements and the oil shocks. So did the great age of innovation. Case closed, you might say.

And yet, something doesn’t quite fit. The 1970s recession was temporary: we came out of it soon enough. What’s more, in terms of Gross World Product, the world is between two and three times richer now than it was then. There is more than enough money for a new Apollo, a new Concorde and a new Green Revolution. So if rapid economic growth drove innovation in the 1950s and ’60s, why has it not done so since?

In The Great Stagnation, Cowen argues that progress ground to a halt because the ‘low-hanging fruit’ had been plucked off. These fruits include the cultivation of unused land, mass education, and the capitalisation by technologists of the scientific breakthroughs made in the 19th century. It is possible that the advances we saw in the period 1945-1970 were similarly quick wins, and that further progress is much harder. Going from the prop-airliners of the 1930s to the jets of the 1960s was, perhaps, just easier than going from today’s aircraft to something much better.

But history suggests that this explanation is fanciful. During periods of technological and scientific expansion, it has often seemed that a plateau has been reached, only for a new discovery to shatter old paradigms completely. The most famous example was when, in 1900, Lord Kelvin declared physics to be more or less over, just a few years before Einstein proved him comprehensively wrong. As late as the turn of the 20th century, it was still unclear how powered, heavier-than-air aircraft would develop, with several competing theories left floundering in the wake of the Wright brothers’ triumph (which no one saw coming).

Lack of money, then, is not the reason that innovation has stalled. What we do with our money might be, however. Capitalism was once the great engine of progress. It was capitalism in the 18th and 19th centuries that built roads and railways, steam engines and telegraphs (another golden era). Capital drove the industrial revolution.

Now, wealth is concentrated in the hands of a tiny elite. A report by Credit Suisse this October found that the richest 1 per cent of humans own half the world’s assets. That has consequences. Firstly, there is a lot more for the hyper-rich to spend their money on today than there was in the golden age of philanthropy in the 19th century. The superyachts, fast cars, private jets and other gewgaws of Planet Rich simply did not exist when people such as Andrew Carnegie walked the earth and, though they are no doubt nice to have, these fripperies don’t much advance the frontiers of knowledge. Furthermore, as the French economist Thomas Piketty pointed out in Capital (2014), money now begets money more than at any time in recent history. When wealth accumulates so spectacularly by doing nothing, there is less impetus to invest in genuine innovation.


During the Golden Quarter, inequality in the world’s economic powerhouses was, remarkably, declining. In the UK, that trend levelled off a few years later, to reach a historic low point in 1977. Is it possible that there could be some relationship between equality and innovation? Here’s a sketch of how that might work.

As success comes to be defined by the amount of money one can generate in the very short term, progress is in turn defined not by making things better, but by rendering them obsolete as rapidly as possible so that the next iteration of phones, cars or operating systems can be sold to a willing market.

In particular, when share prices are almost entirely dependent on growth (as opposed to market share or profit), built-in obsolescence becomes an important driver of ‘innovation’. Half a century ago, makers of telephones, TVs and cars prospered by building products that their buyers knew (or at least believed) would last for many years. No one sells a smartphone on that basis today; the new ideal is to render your own products obsolete as fast as possible. Thus the purpose of the iPhone 6 is not to be better than the iPhone 5, but to make aspirational people buy a new iPhone (and feel better for doing so). In a very unequal society, aspiration becomes a powerful force. This is new, and the paradoxical result is that true innovation, as opposed to its marketing proxy, is stymied. In the 1960s, venture capital was willing to take risks, particularly in the emerging electronic technologies. Now it is more conservative, funding start-ups that offer incremental improvements on what has gone before.

But there is more to it than inequality and the failure of capital.

During the Golden Quarter, we saw a boom in public spending on research and innovation. The taxpayers of Europe, the US and elsewhere replaced the great 19th‑century venture capitalists. And so we find that nearly all the advances of this period came either from tax-funded universities or from popular movements. The first electronic computers came not from the labs of IBM but from the universities of Manchester and Pennsylvania. (Even the 19th-century analytical engine of Charles Babbage was directly funded by the British government.) The early internet came out of the University of California, not Bell or Xerox. Later on, the world wide web arose not from Apple or Microsoft but from CERN, a wholly public institution. In short, the great advances in medicine, materials, aviation and spaceflight were nearly all pump-primed by public investment. But since the 1970s, an assumption has been made that the private sector is the best place to innovate.

The story of the past four decades might seem to cast doubt on that belief. And yet we cannot pin the stagnation of ingenuity on a decline in public funding. Tax spending on research and development has, in general, increased in real and relative terms in most industrialised nations even since the end of the Golden Quarter. There must be another reason why this increased investment is not paying more dividends.

Could it be that the missing part of the jigsaw is our attitude towards risk? Nothing ventured, nothing gained, as the saying goes. Many of the achievements of the Golden Quarter just wouldn’t be attempted now. The assault on smallpox, spearheaded by a worldwide vaccination campaign, probably killed several thousand people, though it saved tens of millions more. In the 1960s, new medicines were rushed to market. Not all of them worked and a few (thalidomide) had disastrous consequences. But the overall result was a medical boom that brought huge benefits to millions. Today, this is impossible.

The time for a new drug candidate to gain approval in the US rose from less than eight years in the 1960s to nearly 13 years by the 1990s. Many promising new treatments now take 20 years or more to reach the market. In 2011, several medical charities and research institutes in the UK accused EU-driven clinical regulations of ‘stifling medical advances’. It would not be an exaggeration to say that people are dying in the cause of making medicine safer.

Risk-aversion has become a potent weapon in the war against progress on other fronts. In 1992, the Swiss genetic engineer Ingo Potrykus developed a variety of rice in which the grain, rather than the leaves, contains a large concentration of Vitamin A. Deficiency in this vitamin causes blindness and death among hundreds of thousands every year in the developing world. And yet, thanks to a well-funded fear-mongering campaign by anti-GM fundamentalists, the world has not seen the benefits of this invention.


In the energy sector, civilian nuclear technology was hobbled by a series of high-profile ‘disasters’, including Three Mile Island (which killed no one) and Chernobyl (which killed only dozens). These incidents caused a global hiatus in research that could, by now, have given us safe, cheap and low-carbon energy. The climate change crisis, which might kill millions, is one of the prices we are paying for 40 years of risk-aversion.

Apollo almost certainly couldn’t happen today. That’s not because people aren’t interested in going to the Moon any more, but because the risk – calculated at a couple-of-per-cent chance of astronauts dying – would be unacceptable. Boeing took a huge risk when it developed the 747, an extraordinary 1960s machine that went from drawing board to flight in under five years. Its modern equivalent, the Airbus A380 (only slightly larger and slightly slower), first flew in 2005 – 15 years after the project go-ahead. Scientists and technologists were generally celebrated 50 years ago, when people remembered what the world was like before penicillin, vaccination, modern dentistry, affordable cars and TV. Now, we are distrustful and suspicious – we have forgotten just how dreadful the world was pre-Golden Quarter.


Risk played its part, too, in the massive postwar shift in social attitudes. People, often the young, were prepared to take huge, physical risks to right the wrongs of the pre-war world. The early civil rights and anti-war protestors faced tear gas or worse. In the 1960s, feminists faced social ridicule, media opprobrium and violent hostility. Now, mirroring the incremental changes seen in technology, social progress all too often finds itself down the blind alleyways of political correctness. Student bodies used to be hotbeds of dissent, even revolution; today’s hyper-conformist youth is more interested in the policing of language and stifling debate when it counters the prevailing wisdom. Forty years ago a burgeoning media allowed dissent to flower. Today’s very different social media seems, despite democratic appearances, to be enforcing a climate of timidity and encouraging groupthink.

Does any of this really matter? So what if the white heat of technological progress is cooling off a bit? The world is, in general, far safer, healthier, wealthier and nicer than it has ever been. The recent past was grim; the distant past disgusting. As Steven Pinker and others have argued, levels of violence in most human societies had been declining since well before the Golden Quarter and have continued to decline since.

We are living longer. Civil rights have become so entrenched that gay marriage is being legalised across the world and any old-style racist thinking is met with widespread revulsion. The world is better in 2014 than it was in 1971.

And yes, we have seen some impressive technological advances. The modern internet is a wonder, more impressive in many ways than Apollo. We might have lost Concorde but you can fly across the Atlantic for a couple of days’ wages – remarkable. Sci-fi visions of the future often had improbable spacecraft and flying cars but, even in Blade Runner’s Los Angeles of 2019, Rick Deckard had to use a payphone to call Rachael.

But it could have been so much better. If the pace of change had continued, we could be living in a world where Alzheimer’s was treatable, where clean nuclear power had ended the threat of climate change, where the brilliance of genetics was used to bring the benefits of cheap and healthy food to the bottom billion, and where cancer really was on the back foot. Forget colonies on the Moon; if the Golden Quarter had become the Golden Century, the battery in your magic smartphone might even last more than a day.


Michael Hanlon

was a science journalist whose work appeared in The Sunday Times and The Daily Telegraph, among others. His last book was In the Interests of Safety (2014), co-written with Tracey Brown. He lived in London.
