Technology

3 technologies poised to change food and the planet

Agriculture’s impact on the planet is massive and relentless. Roughly 40 per cent of the Earth’s land surface is used for cropland and grazing, and domestic animals now far outweigh the remaining populations of wild animals. Every day, more primary forest falls to the advancing tide of crops and pasture, and each year an area as large as the United Kingdom is lost. If humanity is to have a hope of addressing climate change, we must reimagine farming.

COVID-19 has also exposed weaknesses in current food systems. Agricultural scientists have known for decades that farm labour is hard and often exploitative, so it should surprise no one that farm owners had trouble importing the workers needed to keep farms running, even as they struggled to keep food workers free of the virus.

Similarly, “just enough, just in time” food supply chains are efficient but offer little redundancy. And pushing farmland into the wilds connects humans with reservoirs of viruses that — when they enter the human population — prove devastating.

To address these challenges, new technologies promise a greener approach to food production, one that is more plant-based, year-round, local and intensive. Done right, three technologies — vertical, cellular and precision agriculture — can remake our relationship to land and food.

Farm in a box

Vertical farming — the practice of growing food in stacked trays — isn’t new; innovators have been growing crops indoors since Roman times. What is new is the efficiency of LED lighting and advanced robotics that allow vertical farms today to produce 20 times more food on the same footprint as is possible in the field.

Currently, most vertical farms only produce greens, such as lettuce, herbs and microgreens, as they are quick and profitable, but within five years many more crops will be possible as the cost of lighting continues to fall and technology develops.

The controlled environments of vertical farms slash pesticide and herbicide use, recycle water and can be carbon neutral. For both cold and hot climates where field production of tender crops is difficult or impossible, vertical agriculture promises an end to expensive and environmentally intensive imports of berries, small fruits and avocados from regions such as California.

Cellular agriculture, or the science of producing animal products without animals, heralds even bigger change. In 2020 alone, hundreds of millions of dollars flowed into the sector, and in the past few months, the first products have come to market.

This includes Brave Robot “ice cream” that involves no cows and Eat Just’s limited release of “chicken” that never went cluck.

Precision agriculture is another big frontier. Soon self-driving tractors will use data to plant the right seed in the right place, and give each plant exactly the right amount of fertilizer, cutting down on energy, pollution and waste.

Taken together, vertical, cellular and precision farming should allow us to produce more food on less land and with fewer inputs. Ideally, we will be able to produce any crop, anywhere, at any time of year, eliminating the need for long, vulnerable, energy-intensive supply chains.

Is agriculture 2.0 ready?

Of course, these technologies are no panacea — no technology ever is. For one thing, while these technologies are maturing rapidly, they aren’t quite ready for mainstream deployment. Many remain too expensive for small- and medium-sized farms and may drive farm consolidation.

Some consumers and food theorists are cautious, wondering why we can’t produce our food the way our great-grandparents did. Critics of these agricultural technologies call for agroecological or regenerative farming that achieves sustainability through diversified, small-scale farms that feed local consumers. Regenerative agriculture is very promising, but it isn’t clear it will scale.

A package of 'lab-grown' beef.
Could cultured meats become common in grocery stores in the next decade?
(Shutterstock)

While these are serious considerations, there is no such thing as a one-size-fits-all approach to food security. For instance, alternative small-scale mixed-crop farms also suffer labour shortages and typically produce expensive food that is beyond the means of lower-income consumers. But it doesn’t have to be an “either/or” situation. There are benefits and drawbacks to all approaches and we cannot achieve our climate and food security goals without also embracing agricultural technology.

Agriculture’s hopeful future

By taking the best aspects of alternative agriculture (namely the commitment to sustainability and nutrition), the best aspects of conventional agriculture (the economic efficiency and the ability to scale) and novel technologies such as those described above, the world can embark on an agricultural revolution that — when combined with progressive policies around labour, nutrition, animal welfare and the environment — will produce abundant food while reducing agriculture’s footprint on the planet.




Read more: Diet resolutions: 6 things to know about eating less meat and more plant-based foods


This new approach to agriculture, a “closed-loop revolution,” is already blooming in fields (and labs), from the advanced greenhouses of the Netherlands and the indoor fish farms of Singapore to the cellular agriculture companies of Silicon Valley.

Cucumber plants growing indoors
Hydroponic cucumbers can be grown indoors with LED lights.
(Lenore Newman)

Closed-loop farms use little pesticide, are land and energy efficient, and recycle water. They allow for year-round local production, reduce repetitive hand labour, and improve environmental outcomes and animal welfare. If these facilities are matched with good policy, we should see land no longer needed for farming returned to nature as parks or wildlife refuges.

Today’s world was shaped by an agricultural revolution that began ten thousand years ago. This next revolution will be just as transformative. COVID-19 may have put the problems with our food system on the front page, but the long-term prospect for this ancient and vital industry is ultimately a good news story.

Written by Lenore Newman, Canada Research Chair, Food Security and the Environment, University of The Fraser Valley

This article by Lenore Newman, Canada Research Chair, Food Security and the Environment, University of The Fraser Valley, originally published on The Conversation, is licensed under Creative Commons Attribution-NoDerivatives 4.0 International (CC BY-ND 4.0).

Has Earth been visited by an alien spaceship? Harvard professor Avi Loeb vs everybody else

A highly unusual object was spotted travelling through the solar system in 2017. Given a Hawaiian name, ʻOumuamua, it was small and elongated – a few hundred metres by a few tens of metres – and travelling at a speed fast enough to escape the Sun’s gravity and move into interstellar space.

I was at a meeting when the discovery of ʻOumuamua was announced, and a friend immediately said to me, “So how long before somebody claims it’s a spaceship?” It seems that whenever astronomers discover anything unusual, somebody claims it must be aliens.

Nearly all scientists believe that ʻOumuamua probably originates from outside the solar system. It is an asteroid- or comet-like object that has left another star and travelled through interstellar space – we saw it as it zipped by us. But not everyone agrees. Avi Loeb, a Harvard professor of astronomy, suggested in a recent book that it is indeed an alien spaceship. But how feasible is this? And how come most scientists disagree with the claim?

Researchers estimate that the Milky Way should contain around 100 million billion billion comets and asteroids ejected from other planetary systems, and that one of these should pass through our solar system every year or so. So it makes sense that ‘Oumuamua could be one of these. We spotted another last year – “Borisov” – which suggests they are as common as we predict.

What made ʻOumuamua particularly interesting was that it didn’t follow the orbit you would expect – its trajectory shows it has some extra “non-gravitational force” acting on it. This is not too unusual. The pressure of solar radiation or gas or particles driven out as an object warms up close to the Sun can give extra force, and we see this with comets all the time.

Experts on comets and the solar system have explored various explanations for this. Given this was a small, dark object passing us very quickly before disappearing, the images we were able to get weren’t wonderful, and so it is difficult to be sure.

Image of Avi Loeb.
Avi Loeb.
wikipedia, CC BY-SA

Loeb, however, believes that ʻOumuamua is an alien spaceship, powered by a “light sail” – a method for propelling a spacecraft using radiation pressure exerted by the Sun on huge mirrors. He argues the non-gravitational acceleration is a sign of “deliberate” manoeuvring. This argument seems largely to be based on the fact that ʻOumuamua lacks a fuzzy envelope (“coma”) and a comet-like tail, which are usual signatures of comets undergoing non-gravitational acceleration (although jets from particular spots cannot be ruled out).

Sanity checks

He may or may not be right, and there is no way of proving or disproving this idea. But claims like this, especially from experienced scientists, are disliked by the scientific community for many reasons.

If we decide that anything slightly odd that we don’t understand completely in astronomy could be aliens, then we have a lot of potential evidence for aliens – there is an awful lot we don’t understand. To stop ourselves jumping to weird and wonderful conclusions every time we come across something strange, science has several sanity checks.

One is Occam’s razor, which tells us to look for the simplest solutions that raise the fewest new questions. Is this a natural object of the type that we suspect to be extremely common in the Milky Way, or is it aliens? Aliens raise a whole set of supplementary questions (who, why, from where?) which means Occam’s razor tells us to reject it, at least until all simpler explanations are exhausted.

Another sanity check is the general rule that “extraordinary claims require extraordinary evidence”. A not quite completely understood acceleration is not extraordinary evidence, as there are many plausible explanations for it.

Yet another check is the often sluggish but usually reliable peer-review system, in which scientists publish their findings in scientific journals where their claims can be assessed and critiqued by experts in their field.

Alien research

This doesn’t mean that we shouldn’t look for aliens. A lot of time and money is being devoted to researching them. For astronomers who are interested in the proper science of aliens, there is “astrobiology” – the science of looking for life outside Earth based on signs of biological activity. On February 18, Nasa’s Perseverance rover will land on Mars and look for molecules which may include such signatures, for example. Other interesting targets are the moons of Jupiter and Saturn.

Image of Jupiter's moon Europa.
Jupiter’s moon Europa may harbour simple life in its internal ocean.
NASA/JPL/DLR, CC BY-SA

In the next five years, we will also have the technology to search for alien life on planets around other stars (exoplanets). Both the James Webb Space Telescope (due to launch in 2021) and the European Extremely Large Telescope (due for first light in 2025) will analyse exoplanet atmospheres in detail, searching for signs of life. For example, the oxygen in the Earth’s atmosphere is there because life constantly produces it. Meanwhile, the Search for Extraterrestrial Intelligence (SETI) initiative has been scanning the skies with radio telescopes for decades in search of messages from intelligent aliens.

Signs of alien life would be an amazing discovery. But when we do find such evidence, we want to be sure it is good. To be as sure as we can be, we need to present our arguments to other experts in the field to examine and critique, and to follow the scientific method, which, in its slow and plodding way, gets us there in the end.

This would give us much more reliable evidence than claims from somebody with a book to sell. It is quite possible, in the next five to ten years, that somebody will announce that they have found good evidence for alien life. But rest assured this isn’t it.

Written by Simon Goodwin, Professor of Theoretical Astrophysics, University of Sheffield

This article by Simon Goodwin, Professor of Theoretical Astrophysics, University of Sheffield, originally published on The Conversation, is licensed under Creative Commons Attribution-NoDerivatives 4.0 International (CC BY-ND 4.0).

6 important truths about COVID-19 vaccines

One of the biggest barriers standing in the way of ending the pandemic isn’t medical or logistical. It’s the misinformation about the COVID-19 vaccines.

Demand for vaccine currently exceeds supply, but there are many people who are either unsure whether they should take the vaccine or staunchly against it. This is often because they have heard incorrect information about the vaccine or its effects.

Many experts estimate that between 70% and 90% of the population must be vaccinated to block the spread of the virus and reach herd immunity, which occurs when enough individuals are immune to a disease to prevent its spread. If the American population is to achieve herd immunity, it is important to start dispelling myths so that when there is widespread access to the vaccine, people will not hesitate to get their shot.
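
As a rough sanity check on that 70% and 90% range, the textbook approximation for the herd immunity threshold is 1 - 1/R0, where R0 is the basic reproduction number. The sketch below applies that formula to a few illustrative R0 values; the values themselves are assumptions for illustration, not figures from this article.

    # Minimal sketch: the classic herd immunity approximation, threshold = 1 - 1/R0.
    # The R0 values below are illustrative assumptions, not estimates from this article.

    def herd_immunity_threshold(r0: float) -> float:
        """Fraction of the population that must be immune to halt spread."""
        return 1.0 - 1.0 / r0

    for r0 in (2.5, 3.3, 5.0, 10.0):
        print(f"R0 = {r0:>4}: threshold ~ {herd_immunity_threshold(r0):.0%}")
    # An R0 of roughly 3 to 10 spans the 70%-90% vaccination estimate quoted above
    # (before accounting for imperfect vaccine efficacy).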

We are an immunologist and pharmacist. Here are some of the facts behind some of the common myths that we have heard about the COVID-19 mRNA vaccines from patients, friends and family members.

Fact: Vaccines were rigorously tested and found to be safe

The mRNA technology that was used in the Pfizer/BioNTech and Moderna vaccines has existed for more than a decade and is not new in the vaccine development field. Moreover, the approved mRNA vaccines have undergone rigorous testing and clinical trials demonstrating safety and efficacy in people.

More than 90,000 people volunteered for these vaccine trials. The Pfizer-BioNTech vaccine reduced disease by 95% and the Moderna vaccine reduced disease by 94% after volunteers completed two doses. The development, clinical trials and approval occurred faster than seen with previous vaccines. There are several reasons.

First, mRNA technology has been studied for other viral diseases – Zika virus, rabies virus, respiratory syncytial virus – for the past few years. Scientists were able to apply this familiar technology to the SARS-CoV-2 virus immediately after its discovery.

Second, funding and partnerships from government and private firms allowed many of the clinical trial phases to occur in parallel rather than in series, which is the typical testing design. This significantly sped up the process.

Third, most of the costly and time-consuming part of vaccine development is scaling up manufacturing and commercial production, and ensuring quality control. This typically happens after phase 3 efficacy trials have been completed. Because of the urgency of the COVID-19 pandemic, manufacturing and commercial-scale production of these vaccines started at the same time as the human safety clinical trials. This meant that once the vaccines were proved safe and effective there was a large stockpile ready to distribute to the public.
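
For context on where efficacy percentages like the 95% and 94% quoted above come from: trial efficacy is estimated as one minus the ratio of attack rates in the vaccinated and placebo arms. The sketch below uses the published Pfizer-BioNTech phase 3 case split of 8 cases in the vaccine arm versus 162 in the placebo arm; those counts are not quoted in this article, and with roughly equal-sized arms the calculation reduces to the case-count ratio.

    # Sketch: vaccine efficacy = 1 - (attack rate in vaccinated) / (attack rate in placebo).
    # The 8 vs 162 case split is from Pfizer-BioNTech's published phase 3 readout and is
    # used here for illustration; the trial arms are treated as equal-sized, so they cancel.

    def vaccine_efficacy(cases_vaccinated: int, cases_placebo: int) -> float:
        """Efficacy estimate assuming equal-sized trial arms."""
        return 1.0 - cases_vaccinated / cases_placebo

    print(f"Estimated efficacy: {vaccine_efficacy(8, 162):.0%}")  # ~95%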

Fact: Vaccines have no effect on recipients’ genetic material

DNA is located inside the nucleus of a cell. The messenger RNA, or mRNA, delivered from the vaccines enters the cell but not the nucleus. The mRNA instructions are used to manufacture the spike protein, which the body recognizes as not belonging, and this evokes an immune response. After being read, these mRNA vaccine molecules degrade quickly through normal cellular processes.

COVID-19 mRNA vaccines produce only the spike protein and cannot produce the enzymes that would be needed to integrate genetic material into host DNA. It is therefore highly unlikely that they could alter host DNA.

Fact: The mRNA vaccines cannot give you COVID-19

The mRNA vaccines cannot cause disease because they do not contain a live virus.

Most people have mild side effects like arm pain, aches, chills and fever after vaccination. These symptoms are expected, healthy reactions to the vaccine and often subside in a few days.

There have also been some reports of more serious side effects. As of Jan. 18, rates of anaphylaxis – a potentially life-threatening allergic reaction – were 1 in 212,000 in those who received the Pfizer vaccine and 1 in 400,000 in those who received the Moderna vaccine. No one has died from anaphylaxis. There have been reports of death but they do not appear to be due to the vaccine. These deaths have occurred mainly in elderly individuals, a population with higher mortality rates. These deaths are all being investigated, but at this point they are being attributed to underlying conditions.

One thing to keep in mind is that as more individuals are vaccinated, there will be more cases of incidental illness. These are illnesses that would be expected to occur at a certain rate in a large population, but may not be related to receiving the vaccine.

A health worker administers a dose of the Pfizer-BioNtech COVID-19 vaccine to a pregnant woman in Israel on Jan. 23.
Jack Guez/AFP via Getty Images

Fact: Pregnant or breastfeeding women can safely choose to be vaccinated

The CDC states that pregnant or breastfeeding patients may choose to be vaccinated if eligible.

Women who were pregnant or breastfeeding were excluded from the initial trials, which prompted the World Health Organization to initially recommend vaccinating only in high-risk pregnant or breastfeeding individuals.

This controversial stance was reversed after pushback from major maternal health organizations, including the American College of Obstetricians and Gynecologists and Society for Maternal-Fetal Medicine, which pointed out that risk of COVID-19 is greater in pregnant populations.

Because the data is limited, professional societies and organizations have been slow to make a clear recommendation despite experts agreeing that the risk of COVID-19 infection outweighs any potential and theoretical risks of vaccination.

Preliminary animal studies showed no harmful effects and, to date, there have been no reports of harm to the fetus or issues with development from either mRNA vaccine. Individuals who have questions should speak to their health care provider, but a consultation or approval is not required for vaccination.

Fact: COVID-19 vaccines have no effect on fertility

Some individuals are concerned that the COVID-19 vaccinations may cause infertility, which is not true. This myth originated because a short sequence of amino acids that make up the spike protein of SARS-CoV-2 – necessary to infect human cells – is also shared with a protein called syncytin that is present in the placenta, a vital organ in fetal development.

However, the sequence similarity is too short to trigger a dangerous immune reaction that will give rise to infertility, according to experts who study these proteins.

Additionally, there are records of successful pregnancy after infection with SARS-CoV-2, with no evidence of increased miscarriages occurring in early pregnancy. The immune response to the virus doesn’t appear to affect fertility. While pregnant people were excluded from the vaccine trials, 23 Pfizer/BioNTech trial participants became pregnant after receiving the vaccine and there were no miscarriages in those who received the vaccine. Although a small number compared with the more than 40,000 individuals enrolled in the study, it adds to the evidence that there is no need for concern about infertility.

Fact: Those who’ve had COVID-19 will benefit from vaccination

Antibodies from COVID-19 infection are estimated to last approximately two to four months, so those who have had a previous infection should still get vaccinated.

The CDC states that individuals who have had COVID-19 infection may choose to wait 90 days after infection because it is expected that they will be protected by the natural antibodies for that three-month period. However, it is safe to get the vaccine as soon as the quarantine period has ended. Those who received monoclonal antibodies, which are synthetic antibodies manufactured in a lab, should wait for at least 90 days before getting the vaccine.

With new information being released daily and recommendations changing rapidly, it is difficult to keep up. It’s critical that accurate facts about the COVID-19 vaccines are circulated widely so that anyone can access the information needed to make an educated decision.


Written by Sarah Lynch, Director of Skills Education and Clinical Assistant Professor of Pharmacy Practice, Binghamton University, State University of New York

This article by Sarah Lynch, Director of Skills Education and Clinical Assistant Professor of Pharmacy Practice, Binghamton University, State University of New York, originally published on The Conversation, is licensed under Creative Commons Attribution-NoDerivatives 4.0 International (CC BY-ND 4.0).

We’re better able to improve Australian lives than before

The United States Postal Service has just announced the 33rd stamp in its literary arts series – a striking image of novelist and essayist Ursula Le Guin.

Behind the portrait is artwork depicting a scene from The Left Hand of Darkness, Le Guin’s 1969 novel. It features the Gethenians, a species whose members are generally asexual but randomly become male or female during estrus.

Le Guin predicted such a society would avoid any gender-specific roles and invent shared ways to raise children. The radical premise of the novel is captured in its best-known line – the king was pregnant.

Such an idea emphasises the role of chance in our lives. We are born into bodies, families, health, ethnicities and societies we did not choose.

These accidents shape much about the lives that follow, inviting us to consider how we treat those born into extreme disadvantage – what are our obligations to counterbalance the misfortunes created by the lottery?

Try applying the veil of ignorance test

Le Guin was replaying, with characteristic subtlety, an ancient debate about whether to accept unfairness as inevitable – “the poor you will always have with you” – or use institutions to create a more equitable distribution of life chances.

Her novel anticipated by a few years what may be the most famous thought experiment in modern philosophy — the veil of ignorance.

Imagine you can shape the laws of the nation but must make the decisions before knowing anything about your identity. You might turn out to be poor or rich, able-bodied or infirm. Gendered or not, young or old, talented or less gifted — all these conditions are hidden as you decide the rules of your society.

In his 1971 book A Theory of Justice, philosopher John Rawls suggested people behind such a veil would value justice above all.

Ignorance and self-interest dictate fairness

When identity is veiled, self-interest dictates equality. Slave owners would think carefully about supporting servitude if they might become slaves. As Abraham Lincoln is said to have asked when addressing claims that slavery was justified – what is this good thing that no man wants for himself?

Rawls sees two principles as essential to fairness: liberty, so we can make our own choices as long as they do not harm others, and acceptance of difference, so that opportunities are open to all regardless of circumstances. The world should not be carved out only for the clever and the already fortunate.




Read more: Disability and single parenthood loom large in inherited poverty


The prescription has been much debated. Some see the cost of such equality as too high. Others object to a notion of justice defined around material goods, noting that inequality is not only about wealth but also about power, respect, voice and control.

The radical implications of Rawlsian justice are not apparent in the ways Australia and other nations deal with poverty in their midst.

Australia has never been particularly fair

The Melbourne Institute’s long sequence of HILDA (Household, Income and Labour Dynamics in Australia) surveys points to a strong correlation between poverty in childhood and poverty in adult life – a situation where poverty begets poverty.

Australian rates of poverty slightly exceed OECD averages, meaning the poor are indeed always with us.

Our response to poverty began with private charity, but early in the twentieth century moved to government programs, principally payments. Age and invalid pensions began in 1909, followed by unemployment benefits, pensions for veterans, and support for mothers, children and health after World War I.




Read more: Land of the ‘fair go’ no more: wealth in Australia is becoming more unequal


Australia has never pursued a substantial redistribution of wealth. Benefit payments remain modest and usually means-tested.

Elections consistently suggest Australians are comfortable with limits to public generosity; we choose governments that tax and spend slightly more than in the United States but a good deal less than in Europe. As a result, many Australians grow up in poverty, and a significant number pass it on to their children.

Even a single year in poverty during childhood harms likely income in adulthood, and the longer someone is in poverty in childhood the less chance they have of escaping poverty in adulthood.

A long-tailed lottery

This makes the lottery of birth a lifelong inheritance, with consequences for access to education, health, employment and social capital.

For those who find Rawls too confronting, there are other ways of thinking about our responsibilities toward those less fortunate. One is to look through the lens of what Nobel Laureate economist Amartya Sen labels “capability”, an approach at one time adopted by Australia’s treasury.

Capability is the powerful idea that each citizen should be equipped to lead a life they have reason to value. Investment through public provision – schools, hospitals, pensions and so on – is part of ensuring capability.

Practically, this might mean a national disability insurance scheme rather than a pension. But improvements through public investment can be hard to deliver in practice. Capability means recognising and responding to individual needs, but personalising services is expensive and sometimes problematic. “Why are you treating me differently from others?” is a reasonable question.

New Zealand points to a way out

file 20210216 18 yfazrj.jpg?ixlib=rb 1.1
Former New Zealand Prime Minister Bill English made targets deliberately hard.
Nick Perry/AP

Traditional public administration emphasises equal treatment for all and so employs standardised models and allocation principles.

By dealing with everyone in the same way, government agencies achieve technical (Rawls-like) fairness but not always the right (Sen-like) response to individual circumstances.

Yet there are encouraging signs of greater flexibility. New Zealand is using performance indicators based around people.

Jacinda Ardern’s predecessor as prime minister, National Party leader Bill English, reshaped his cabinet’s mission statements to focus on individuals.

In assigning ministers a shared requirement to reduce the number of assaults on children, he chose a target he said was “deliberately designed to be difficult”.

It would “require significant focus on the customer by multiple agencies”.

Similar thinking influenced Our Public Service Our Future, the 2019 review of the Australian Public Service led by David Thodey. It embraced the idea of using technology to personalise responses, and commended recent state and federal initiatives to coordinate services around each citizen.

I expand on this in On Life’s Lottery, published last month by Hachette, examining ways in which communities, governments and charities can work together to improve choices for people born into disadvantage.

Reflections on our obligations to others, and pathways to tailor support, are central to an essential goal: how we ensure the ticket we get at birth does not solely determine the journey ahead.

Written by Glyn Davis, Distinguished Professor of Political Science, Crawford School of Public Policy, Australian National University

This article by Glyn Davis, Distinguished Professor of Political Science, Crawford School of Public Policy, Australian National University, originally published on The Conversation, is licensed under Creative Commons Attribution-NoDerivatives 4.0 International (CC BY-ND 4.0).

A look at the technology Perseverance will need to survive landing on Mars

This month has been a busy one for Mars exploration. Several countries sent missions to the red planet in July last year, taking advantage of a launch window. Most have now arrived after their roughly seven-month voyage.

Within the next few days, NASA will perform a direct entry into the Martian atmosphere to land the Perseverance rover in Mars’s Jezero Crater.




Read more: As new probes reach Mars, here’s what we know so far from trips to the red planet


Perseverance, about the size of a car, is the largest Mars payload ever — it literally weighs a tonne (on Earth). After landing, the rover will search for signs of ancient life and gather samples to eventually be returned to Earth.

The mission will use similar hardware to that of the 2012 Mars Science Laboratory (MSL) mission, which landed the Curiosity rover, but will have certain upgrades including improved rover landing accuracy.

Curiosity’s voyage provided a wealth of information about what kind of environment Mars 2020 might face and what technology it would need to survive.

An artist’s impression of Mars 2020 approaching the red planet.
NASA/JPL-Caltech

Mars: a most alien land

As Mars is a hostile and remote environment with an atmosphere about 100 times thinner than Earth’s, there’s little atmosphere for incoming spacecraft to use to slow down aerodynamically.

Rather, surviving entry to Mars requires a creative mix of aerodynamics, parachutes, retropropulsion (using engine thrust to decelerate for landing) and often a large airbag.

Also, models of Martian weather aren’t updated in real time, so we don’t know exactly what environment a probe will face during entry. Unpredictable weather events, especially dust storms, are one reason landing accuracy has suffered in previous missions.




Read more: Mars missions from China and UAE are set to go into orbit – here’s what they could discover


NASA engineers call the entry, descent and landing phase (EDL) of Mars entry missions the “seven minutes of terror”. In just seven minutes there are myriad ways entry can fail.

A profile of Mars 2020’s entry, descent and landing phase.
NASA JPL

Thermal protection

The 2012 MSL spacecraft was fitted with a 4.5-metre-diameter heat shield that protected the vehicle during its descent through Mars’s atmosphere.

It entered the Martian atmosphere at around 5,900m per second. This is hypersonic, which means it’s more than five times the speed of sound.
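
To get a feel for how hypersonic that is, here is a rough Mach number estimate. The entry speed is from the article; the speed of sound of about 240 metres per second in Mars’s cold, thin, carbon dioxide atmosphere is an assumed approximate figure, not one given here.

    # Rough Mach number for the MSL / Mars 2020 entry.
    # Entry speed comes from the article; the ~240 m/s speed of sound in Mars's cold
    # CO2 atmosphere is an assumed, approximate value (it varies with temperature).
    entry_speed_m_s = 5_900.0
    speed_of_sound_mars_m_s = 240.0  # assumption

    mach = entry_speed_m_s / speed_of_sound_mars_m_s
    print(f"Approximate entry Mach number: {mach:.0f}")  # roughly Mach 25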

Mars 2020 will be similar. It will rely heavily on its thermal protection system, including a front heat shield and backshell heat shield, to stop hot flow from damaging the rover stowed inside.

Pictured are the Mars 2020 backshell heat shield (foreground) and the main PICA heat shield (background).
NASA/JPL-Caltech

At hypersonic speeds, Mars’s atmosphere won’t be able to get out of the spacecraft’s way fast enough. As a result, a strong shock wave will form off the front.

In this case, gas in front of the vehicle will be rapidly compressed, causing a huge jump in pressure and temperature between the shock wave and the heat shield.

The hot post-shock flow heats up the surface of the heat shield during the entry, but the heat shield protects the internal structure from this heat.

Since the MSL 2012 and Mars 2020 missions carry relatively large payloads, these spacecraft are at higher risk of overheating during the entry phase.

But MSL effectively circumvented this issue, largely thanks to a specially designed heat shield: MSL was the first Mars vehicle ever to make use of NASA’s Phenolic Impregnated Carbon Ablator (PICA) material.

This material, which the Mars 2020 spacecraft also uses, is made of chopped carbon-fibre embedded in a synthetic resin. It’s very light, can absorb immense heat and is an effective insulator.

Guided entry

All entries before the 2012 MSL mission had been unguided, meaning they weren’t controlled in real-time by a flight computer.

Instead, the spacecraft were designed to hit Mars’s “entry interface” (125km above the ground) in a particular way, before landing wherever the Martian winds took them. With this came significant landing uncertainty.

This artist’s impression shows thrusters controlling the angle of the spacecraft during MSL 2012’s Mars entry. Mars 2020 will use the same technique.
NASA/JPL-Caltech

The area of landing uncertainty is called the landing ellipse. NASA’s 1970s Viking Mars missions had an estimated landing ellipse of 280x100km. But both MSL and now Mars 2020 were built to outperform previous efforts.

The MSL mission was the first guided Mars entry. An upgraded version of the Apollo guidance computer was used to control the vehicle in real time to ensure an accurate landing.

With this, MSL reduced its estimated landing ellipse to 20×6.5km and ended up landing just 2km from its target. With any luck, Mars 2020 will achieve similar results.
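
To put those landing ellipses in perspective, here is a quick comparison of their areas, treating each as a simple ellipse whose axes are the figures quoted above.

    import math

    # Compare landing-ellipse areas using the axis figures quoted in the article.
    # Area of an ellipse = pi * (semi-major axis) * (semi-minor axis).
    def ellipse_area_km2(major_km: float, minor_km: float) -> float:
        return math.pi * (major_km / 2) * (minor_km / 2)

    viking = ellipse_area_km2(280, 100)   # 1970s Viking missions
    msl = ellipse_area_km2(20, 6.5)       # 2012 MSL / Curiosity; Mars 2020 aims for similar

    print(f"Viking ellipse: ~{viking:,.0f} km^2")
    print(f"MSL ellipse:    ~{msl:,.0f} km^2")
    print(f"Improvement:    ~{viking / msl:.0f}x smaller target area")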

Pictured are NASA’s various Mars landing sites, including the proposed Perseverance landing site. Perseverance is expected to land in a relatively less clear area.
NASA/JPL-Caltech

Supersonic parachuting

A parachute will be used to slow down the Mars 2020 spacecraft enough for final landing manoeuvres to be performed.

With a 21.5m diameter, the parachute will be the largest ever used on Mars and will have to be deployed faster than the speed of sound.

Deploying the parachute at the right time will be critical for achieving an accurate landing.

A brand new technology called “range trigger” will control the parachute’s deployment time, based on the spacecraft’s relative position to its desired landing spot.

The spacecraft descending after the parachute has been deployed.
NASA/JPL-Caltech

State-of-the-art navigation

About 20 seconds after the parachute opens, the heat shield will separate from the spacecraft, exposing Perseverance to the Martian environment. Its cameras and sensors can then begin to collect information as it approaches the ground.

The rover’s specialised terrain-relative navigation system will help it land safely by diverting it to a stable landing surface.

Perseverance will compare a pre-loaded map of the landing site with images collected during its rapid descent. It should then be able to identify landmarks below and estimate its relative position to the ground to an accuracy of about 40m.

Terrain-relative navigation is far superior to methods used for past Mars entries. Older spacecraft had to rely on their own internal estimates of their location during entry.

And there was no way to effectively recalibrate this information. They could only guess where they were to an accuracy of about 2-3km as they approached the ground.




Read more: Mars InSight: why we’ll be listening to the landing of the Perseverance rover


The final touchdown

The parachute carrying the Mars 2020 spacecraft can only slow it down to about 320km per hour.

To land safely, the spacecraft will jettison the parachute and backshell and use rockets facing the ground to ease down for the final 2,100m. This is called “retropropulsion”.

And to avoid using airbags to land the rover (as was done in missions prior to MSL), Mars 2020 will use the “skycrane” manoeuvre; a set of cables will slowly lower Perseverance to the ground as it prepares for autonomous operation.

Once Perseverance senses its wheels are safely on the ground, it will cut the cables connected to the descent vehicle (which will fly off and crash somewhere in the distance).

And with that, the seven minutes of terror will be over.

Perseverance rover being placed on Martian soil by the skycrane.
NASA/JPL-Caltech

Written by Chris James, ARC DECRA Fellow, Centre for Hypersonics, School of Mechanical and Mining Engineering, The University of Queensland

This article by Chris James, ARC DECRA Fellow, Centre for Hypersonics, School of Mechanical and Mining Engineering, The University of Queensland, originally published on The Conversation, is licensed under Creative Commons Attribution-NoDerivatives 4.0 International (CC BY-ND 4.0).

Graphene could one day be used to make quick, reliable tests for viruses like SARS-CoV-2

Graphene is a layer of carbon only one atom thick. Since it was first isolated in 2004, it has found applications in strengthening materials, accelerating electronics, and boosting performance in batteries, among others.

It also shows great potential for use in biosensors. These are devices used to detect small concentrations of biomarkers in biological samples, such as blood or saliva. Biomarkers are molecules that suggest the presence of disease.

In a recent review, my colleagues and I looked into the latest research to find the most exciting potential applications of graphene in point-of-care tests. This includes diagnostic tests for SARS-CoV-2, the virus responsible for COVID-19, but also detecting other viruses, bacteria and even cancerous tumours.

It’s early days. The technology still needs to go through clinical trials and processes need to be developed to manufacture these tests at scale. However, in the next five years, graphene could start to play a part in healthcare technology.

A single atom layer

Because graphene is a two-dimensional material, it has a tremendously high surface-to-volume ratio, which makes it very sensitive to changes in its environment. Think of a vast, calm lake. Any tiny pebble that hits the surface creates a ripple that quickly expands across the water.

Similarly, when other substances – even single molecules – hit graphene, they generate small, measurable electrical pulses.

Relying on this phenomenon alone to detect SARS-CoV-2 wouldn’t work. When used as a biosensing layer in electronic devices, graphene is sensitive down to a single molecule. Yet it can’t tell the difference between coronavirus and the flu – the same way the lake would confuse a pebble and a marble.

To solve this, researchers have developed chemically modified graphene, coating it with antibodies that bind specifically to SARS-CoV-2. When the virus reaches the sensor and attaches to the antibody, it triggers an electrical signal through the thin graphene layer.

Someone in scrubs with gloves on holding a test tube and cotton bud.
Current COVID tests involve extracting RNA from the virus.
Shutterstock/Photoroyalty

SARS-CoV-2 carries all its genetic information in a strand of RNA, which is often used in detection processes like a polymerase chain reaction (PCR). A PCR device amplifies the amount of RNA in a saliva sample until it becomes detectable under a microscope.
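
That “amplifies until it becomes detectable” step is exponential: each PCR cycle roughly doubles the number of copies of the target sequence. A minimal sketch, in which the starting copy number and cycle count are illustrative assumptions rather than figures from the article:

    # Idealised PCR amplification: each cycle roughly doubles the target copies
    # (ignoring efficiency losses). Starting copies and cycle count are assumptions.
    start_copies = 10
    cycles = 30

    copies = start_copies * 2 ** cycles
    print(f"{start_copies} starting copies after {cycles} cycles: ~{copies:.2e}")  # ~1e10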

But this process is time consuming, requires expensive equipment, and depends on highly specific and costly reagents – substances used in the labelling and amplification process, or in the preparation of the sample.




Read more: Why we need to test COVID-19 tests


A graphene-enabled device can detect the virus in swabs and other biological samples. It does so without any of the reagents or treatments used in other tests to extract the specific biomarker from the biological sample. Nor does it require labelling with fluorescent probes to allow detection by optical methods, as is done in PCR-based tests.

The results are quick, taking only a few seconds compared with the hours it can take for PCR tests.

Thanks to graphene’s unique electrical properties, graphene-based biosensors could detect smaller amounts of the targeted biomarker than other sensors. Graphene sensors can detect one copy of the virus RNA in 10¹⁸ (one followed by 18 zeroes) litres of biological sample – almost like finding our pebble if it were lost in the Mediterranean Sea. For comparison, PCR tests require around 34 billion copies of the virus in 40ml of liquid.
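
Taking the figures quoted above at face value, a quick back-of-the-envelope comparison of the two detection limits, expressed as copies per litre, looks like this:

    # Back-of-the-envelope comparison of detection limits, using only the figures quoted above.
    graphene_copies_per_litre = 1 / 1e18        # 1 RNA copy in 10^18 litres of sample
    pcr_copies_per_litre = 34e9 / 0.040         # ~34 billion copies in 40 ml

    print(f"Graphene sensor: ~{graphene_copies_per_litre:.1e} copies per litre")
    print(f"PCR test:        ~{pcr_copies_per_litre:.1e} copies per litre")
    print(f"Ratio:           ~{pcr_copies_per_litre / graphene_copies_per_litre:.1e}")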

Graphene sensors also meet the World Health Organization’s requirements for efficient point-of-care sensors.

As well as speed and sensitivity, some graphene-based sensors bring other advantages – like being printable on a piece of paper. This means it should be simple to incorporate them into lateral-flow assays, a cost-effective technology commonly used in point-of-care diagnostics for both lab use and home testing in different areas, such as pregnancy tests and COVID-19 tests.

Beyond viruses

Featuring a mixture of graphene oxide, gold nanoparticles and antibodies, printed sensors can detect as few as ten proteins in one litre of sample. In particular, they detect a protein called CA125, a biomarker linked to ovarian, lung and breast cancers.

As well as detecting viruses, researchers have developed graphene sensors that detect harmful bacteria and tumour cells. These solutions could speed up the early detection of fast-spreading infectious diseases, such as salmonella infections, E. coli contamination or cholera outbreaks.

Nowadays, detecting these pathogens relies on PCR methods – like the ones used for coronavirus – and other procedures based on counting bacteria colonies. The latter approaches are time consuming with low sensitivity, requiring large populations of bacteria in the biological sample. This is a problem when the presence of just one pathogen could be enough to spread the disease.

Graphene sensors overcome this hurdle and, at the same time, provide quick and reliable results. Among other devices, a graphene-based field-effect transistor can detect single cells of several E. coli strains in water and could be easily manufactured at large scale for areas that need a high volume of tests.

Researchers have also developed graphene-enabled devices that detect cancer biomarkers and probes that distinguish tumour cells in blood samples – potentially saving patients from invasive procedures like biopsies.

Graphene-based diagnostics has the potential to reach commercial viability. In parallel to studies investigating graphene sensors, advances in manufacturing techniques mean graphene and related materials can be produced at large scales to a high quality.

Graphene is gradually reaching commercial viability in multiple areas. The first wave of graphene-enabled products are now on the market and commercialisation activities are moving from materials development towards building components.

Based on the outcome of initial research results, there are some challenges to overcome before graphene-based point-of-care sensors reach the market, including fulfilling all the health and safety requirements and clinical trials. But several industries and spin-off companies are already interested. Graphene and related “layered materials” will become a key player in the future of medical technologies.

Written by Luigi G. Occhipinti, Director of Research in Graphene and Related Technologies, University of Cambridge

This article by Luigi G. Occhipinti, Director of Research in Graphene and Related Technologies, University of Cambridge, originally published on The Conversation, is licensed under Creative Commons Attribution-NoDerivatives 4.0 International (CC BY-ND 4.0).

As NZ gets serious about climate change, can electricity replace fossil fuels in time?

As fossil fuels are phased out over the coming decades, the Climate Change Commission (CCC) suggests electricity will take up much of the slack, powering our vehicle fleet and replacing coal and gas in industrial processes.

But can the electricity system really provide for this increased load where and when it is needed? The answer is “yes”, with some caveats.

Our research examines climate change impacts on the New Zealand energy system. It shows we’ll need to pay close attention to demand as well as supply. And we’ll have to factor in the impacts of climate change when we plan for growth in the energy sector.

Demand for electricity to grow

While electricity use has not increased in NZ in the past decade, many agencies project steeply rising demand in coming years. This is partly due to increasing population and gross domestic product, but mostly due to the anticipated electrification of transport and industry, which could result in a doubling of demand by mid-century.

The graph (below), based on a range of projections from various agencies, shows demand may increase by between 10TWh and 60TWh (Terawatt hours) by 2050. This is on top of the 43TWh of electricity currently generated per year to power the whole country.

SOURCES: Historical generation data, Meridian Energy Ltd, internal modelling, Transpower, MBIE, BEC, CCC.

It’s hard to get a sense of the scale of the new generation required, but if wind was the sole technology employed to meet demand by 2050, between 10 and 60 new wind farms would be needed nationwide.
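
That framing implies each of those wind farms generates roughly 1TWh a year. A rough sketch of what that means in installed capacity, assuming a capacity factor of around 40% and large 4MW turbines (both assumptions for illustration, not figures from the article):

    # Rough installed capacity for a wind farm generating ~1 TWh per year.
    # The 40% capacity factor and 4 MW turbine size are assumptions for illustration.
    annual_energy_twh = 1.0
    hours_per_year = 8_760
    capacity_factor = 0.40   # assumption
    turbine_mw = 4.0         # assumption

    average_mw = annual_energy_twh * 1e6 / hours_per_year   # TWh -> MWh, then divide by hours
    installed_mw = average_mw / capacity_factor
    turbines = installed_mw / turbine_mw
    print(f"Average output:     ~{average_mw:.0f} MW")
    print(f"Installed capacity: ~{installed_mw:.0f} MW (~{turbines:.0f} large turbines)")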




Read more: Power play: despite the tough talk, the closure of Tiwai Point is far from a done deal


Of course, we won’t only build wind farms. Grid-scale solar, rooftop solar, new geothermal, some new small hydro plant and possibly tidal and wave power will all have a part to play.

Several windmills on a hillside in New Zealand.
We will need more wind farms.
Shutterstock/JoshuaDaniel

Managing the demand

As well as providing more electricity supply, demand management and batteries will also be important. Our modelling shows peak demand (which usually occurs when everyone turns on their heaters and ovens at 6pm in winter) could be up to 40% higher by 2050 than it is now.

But meeting this daily period of high demand could see expensive plant sitting idle for much of the time (with the last 25% of generation capacity only used about 10% of the time).

This is particularly a problem in a renewable electricity system when the hydro lakes are dry, as hydro is one of the few renewable electricity sources that can be stored during the day (as water behind the dam) and used over the evening peak (by generating with that stored water).

Demand response will therefore be needed. For example, this might involve an industrial plant turning off when there is too much load on the electricity grid.




Read more: How to cut emissions from transport: ban fossil fuel cars, electrify transport and get people walking and cycling


But by 2050, a significant number of households will also need smart appliances and meters that automatically use cheaper electricity at non-peak times. For example, washing machines and electric car chargers could run automatically at 2am, rather than 6pm when demand is high.

Our modelling shows a well set up demand response system could mitigate dry-year risk (when hydro lakes are low on water) in coming decades, where currently gas and coal generation is often used.

Instead of (or as well as) having demand response and battery systems to combat dry-year risk, a pumped storage system could be built. This is where water is pumped uphill when hydro lake inflows are plentiful, and used to generate electricity during dry periods.

The NZ Battery project is currently considering the potential for this in New Zealand.

Almost (but not quite) 100% renewable

Dry-year risk would be greatly reduced and there would be “greater greenhouse gas emissions savings” if the Interim Climate Change Committee’s (ICCC) 2019 recommendation to aim for 99% renewable electricity was adopted, rather than aiming for 100%.

A small amount of gas-peaking plant would therefore be retained. The ICCC said going from 99% to 100% renewable electricity by overbuilding would only avoid a very small amount of carbon emissions, at a very high cost.

Our modelling supports this view. The CCC’s draft advice on the issue also makes the point that, although 100% renewable electricity is the “desired end point”, timing is important to enable a smooth transition.

Despite these views, Energy Minister Megan Woods has said the government will be keeping the target of a 100% renewable electricity sector by 2030.

Megan Woods speaking in front of aluminium ingots.
Minister of Energy and Resources Megan Woods speaking at the Tiwai Point aluminium smelter, due to close in 2024.
GettyImages

Impacts of climate change

In future, the electricity system will have to respond to changing climate patterns as well. The National Institute of Water and Atmospheric Research predicts winds will increase in the South Island and decrease in the far north in coming decades.

The catchments feeding the biggest hydro lakes will get wetter (more rain in their headwaters), and the seasonality of their inflows will change due to changes in the amount of snow in these catchments.

Our modelling shows the electricity system can adapt to those changing conditions. One good news story (unless you’re a skier) is that warmer temperatures will mean less snow storage at lower elevations, and therefore higher lake inflows in the big hydro catchments in winter, leading to a better match between times of high electricity demand and higher inflows.




Read more: New Zealand wants to build a 100% renewable electricity grid, but massive infrastructure is not the best option


The price is right

The modelling also shows the cost of generating electricity is not likely to increase, because the price of building new sources of renewable energy continues to fall globally.

Because the cost of building new renewables is now cheaper than non-renewables (such as coal-fired plants), renewables are more likely to be built to meet new demand in the near term.

While New Zealand’s electricity system can enable the rapid decarbonisation of (at least) our transport and industrial heat sectors, certainty is needed in some areas so the electricity industry can start building to meet demand everywhere.

Bipartisan cooperation at government level will be important to encourage significant investment in generation and transmission projects with long lead times and life expectancies.

Infrastructure and markets are needed to support demand response uptake, as well as certainty around the Tiwai exit in 2024 and whether pumped storage is likely to be built.

Our electricity system can support the rapid decarbonisation needed if New Zealand is to do its fair share globally to tackle climate change.

But sound planning, firm decisions and a supportive and relatively stable regulatory framework are all required before shovels can hit the ground.

Written by Jen Purdie, Senior Research Fellow, University of Otago

This article by Jen Purdie, Senior Research Fellow, University of Otago, originally published on The Conversation, is licensed under Creative Commons Attribution-NoDerivatives 4.0 International (CC BY-ND 4.0).

The TV networks holding back the future

If I offered you money for something, an offer you didn’t have to accept, would you call it a grab?

What if I actually owned the thing I offered you money for, and the offer was more of a gentle inquiry?

Welcome to the world of television, where the government (which actually owns the broadcast spectrum) can offer networks the opportunity to hand back a part of it, in return for generous compensation, and get accused of a “spectrum grab”.

If the minister, Paul Fletcher, hadn’t previously worked in the industry (he was a director at Optus) he wouldn’t have believed it.

Here’s what happened. The networks have been sitting on more broadcast spectrum (radio frequencies) than they need since 2001.

That’s when TV went digital in order to free up space for emerging uses such as mobile phones.

Pre-digital, each station needed a lot of spectrum — seven megahertz, plus another seven (and at times another seven) for fill-in transmitters in nearby areas.

It meant that in major cities it took far more spectrum to deliver the five TV channels than Telstra plans to use for its entire 5G phone and internet work.

Digital meant each channel would only need two megahertz to do what it did before, a huge saving Prime Minister John Howard was reluctant to pick up.

His own department told him there were “better ways of introducing digital television than by granting seven megahertz of spectrum to each of the five free-to-air broadcasters at no cost when a standard definition service of a higher quality than the current service could be provided with around two megahertz”.

His Office of Asset Sales labelled the idea of giving them the full seven a “de facto further grant of a valuable public asset to existing commercial interests”.

Seven, Nine and Ten got the de facto grant, and after an uninspiring half decade of using it to broadcast little-watched high definition versions of their main channels, used it instead to broadcast little-watched extra channels with names like 10 Shake, 9Rush and 7TWO.

Micro-channels are better delivered by the internet

TV broadcasts are actually a good use of spectrum where masses of people need to watch the same thing at once. They use less broadcast bandwidth than the same number of individual streams delivered through the air by services such as Netflix would.

But when they are little-watched (10 Shake got 0.4% of the viewing audience in prime time last week, an average of about 10,000 people Australia-wide) the bandwidth is much better used allowing people to watch what they want.




Read more: Broad reform of FTA television is needed to save the ABC


It’s why the government is kicking community television off the air. Like 10 Shake, its viewers can be counted in thousands and easily serviced by the net.

The government’s last big auction of freed-up television spectrum in 2013 raised A$1.9 billion, and that was for leases that expire in 2029.

Among the buyers were Telstra, Optus and TPG.

The successful bidders for leases on vacated television spectrum in 2013.
Australian Communications and Media Authority

The money now on offer, and the exploding need for spectrum, is why last November Fletcher decided to have another go.

Rather than kick the networks off what they’ve been hogging (as he is doing with community TV) he offered them what on the face of it is an astoundingly generous deal.

Any networks that want to can agree to combine their allocations, using new compression technology to broadcast about as many channels as before from a shared facility, freeing up what might be a total of 84 megahertz for high-value communications. Any that don’t, don’t need to.

All the networks need to do is share

The deal would only go ahead if at least two commercial licence holders in each licence area signed up. At that point the ABC and SBS would combine their allocations and the commercial networks would be freed of the $41 million they currently pay in annual licence fees, forever.

That’s right. From then on, they would be guaranteed enough spectrum to do about what they did before, except for free, plus a range of other benefits.

The near-instant reaction, in a letter signed by the heads of each of the regional networks, was to say no, they didn’t want to share. The plan was “simply a grab for spectrum to bolster the federal government’s coffers”.

And sharing’s not that hard

It’s as if the networks own the spectrum (they don’t) and it is not as if they are normally reluctant to share — they share just about everything.

For two decades they’ve shared their transmission towers, and for 18 months Nine and Seven have been playing out their programs from the same centre.

Nine’s soon-to-be-demolished tower in Sydney’s Willoughby broadcasts Seven, Nine and Ten.
Dean Lewins/AAP

That’s right. Nine and Seven use the same computers, same operators, same desks, to play programs.

One day it is entirely possible that a Seven promo or ad will accidentally go to air on Nine, just as a few years back some pages from the Sydney Morning Herald were accidentally printed in the Daily Telegraph, whose printing plants the Herald uses.

All the minister is asking is for them to share something else, what Australia’s treasury describes as a “scarce resource of high value to Australian society”.

There’s a good case for going further, taking almost all broadcasting off the air and putting it online, or sending it out by direct-to-home satellite, removing the need for bandwidth-hogging fill-in transmitters.

Seven, Nine and Ten have yet to respond. Indications are they’re not much more positive than their regional cousins, although more polite. They’re standing in the way of progress.

Tags: #networks #holding #future

Written by Peter Martin, Visiting Fellow, Crawford School of Public Policy, Australian National University

This article by Peter Martin, Visiting Fellow, Crawford School of Public Policy, Australian National University, originally published on The Conversation is licensed under Creative Commons 4.0 International(CC BY-ND 4.0).

A tiny crystal device could boost gravitational wave detectors to reveal the birth cries of black holes

In 2017, astronomers witnessed the birth of a black hole for the first time. Gravitational wave detectors picked up the ripples in spacetime caused by two neutron stars colliding to form the black hole, and other telescopes then observed the resulting explosion.

But the real nitty-gritty of how the black hole formed, the movements of matter in the instants before it was sealed away inside the black hole’s event horizon, went unobserved. That’s because the gravitational waves thrown off in these final moments had such a high frequency that our current detectors can’t pick them up.




Read more:
At last, we’ve found gravitational waves from a collapsing pair of neutron stars


If you could observe ordinary matter as it turns into a black hole, you would be seeing something similar to the Big Bang played backwards. The scientists who design gravitational wave detectors have been hard at work figuring out how to improve our detectors to make this possible.

Today our team is publishing a paper that shows how this can be done. Our proposal could make detectors 40 times more sensitive to the high frequencies we need, allowing astronomers to listen to matter as it forms a black hole.

It involves creating weird new packets of energy (or “quanta”) that are a mix of two types of quantum vibrations. Devices based on this technology could be added to existing gravitational wave detectors to gain the extra sensitivity needed.

An artist’s conception of photons interacting with a millimetre scale phononic crystal device placed in the output stage of a gravitational wave detector.
Carl Knox / OzGrav / Swinburne University, Author provided

Quantum problems

Gravitational wave detectors such as the Laser Interferometer Gravitational-wave Observatory (LIGO) in the United States use lasers to measure incredibly small changes in the distance between two mirrors. Because they measure changes 1,000 times smaller than the size of a single proton, the effects of quantum mechanics – the physics of individual particles or quanta of energy – play an important role in the way these detectors work.
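
To get a feel for those numbers, here is a back-of-the-envelope calculation. The proton size (about 1.7e-15 m) and the 4 km arm length are round figures used for illustration only.

```python
# Back-of-the-envelope strain estimate for a LIGO-like detector.
# Assumed round figures: proton size ~1.7e-15 m, 4 km detector arms.

proton_size_m = 1.7e-15
displacement_m = proton_size_m / 1000      # "1,000 times smaller than a proton"
arm_length_m = 4_000.0

strain = displacement_m / arm_length_m
print(f"displacement ~ {displacement_m:.1e} m")
print(f"dimensionless strain h ~ {strain:.1e}")  # of order 1e-22
```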

Two different kinds of quantum packets of energy are involved, both predicted by Albert Einstein. In 1905 he predicted that light comes in packets of energy that we call photons; two years later, he predicted that heat and sound energy come in packets of energy called phonons.

Photons are used widely in modern technology, but phonons are much trickier to harness. Individual phonons are usually swamped by vast numbers of random phonons that are the heat of their surroundings. In gravitational wave detectors, phonons bounce around inside the detector’s mirrors, degrading their sensitivity.




Read more:
Australia’s part in the global effort to discover gravitational waves


Five years ago physicists realised you could solve the problem of insufficient sensitivity at high frequency with devices that combine phonons with photons. They showed that devices in which energy is carried in quantum packets sharing the properties of both phonons and photons can behave in quite remarkable ways.

These devices would involve a radical change to a familiar concept called “resonant amplification”. Resonant amplification is what you do when you push a playground swing: if you push at the right time, all your small pushes create big swinging.

The new device, called a “white light cavity”, would amplify all frequencies equally. This is like a swing that you could push any old time and still end up with big results.
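
To make the swing analogy concrete, here is a toy calculation of a driven, damped oscillator's steady-state response. It illustrates resonant amplification only, not the detector physics; all parameter values are arbitrary and chosen for illustration.

```python
import numpy as np

# Toy model of resonant amplification: the steady-state amplitude of a driven,
# damped oscillator. Parameter values are arbitrary, chosen for illustration.
omega0 = 1.0     # natural frequency (the "swing")
gamma = 0.05     # damping
force = 1.0      # drive strength per unit mass

drive = np.linspace(0.5, 1.5, 11)
amplitude = force / np.sqrt((omega0**2 - drive**2)**2 + (gamma * drive)**2)

for w, a in zip(drive, amplitude):
    print(f"drive frequency {w:.2f}: response {a:6.1f}")
# The response is roughly 20x larger at the natural frequency than off resonance:
# ordinary resonant amplification. A white light cavity aims instead for a
# response that stays high across all frequencies rather than peaking at one.
```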

However, nobody has yet worked out how to make one of these devices, because the phonons inside it would be overwhelmed by random vibrations caused by heat.

Quantum solutions

In our paper, published in Communications Physics, we show how two different projects currently under way could do the job.

The Niels Bohr Institute in Copenhagen has been developing devices called phononic crystals, in which thermal vibrations are controlled by a crystal-like structure cut into a thin membrane. The Australian Centre of Excellence for Engineered Quantum Systems has also demonstrated an alternative system in which phonons are trapped inside an ultrapure quartz lens.

Artist’s impression of a tiny device that could boost gravitational wave detector sensitivity in high frequencies.
Carl Knox / OzGrav / Swinburne University, Author provided

We show both of these systems satisfy the requirements for creating the “negative dispersion” – which spreads light frequencies in a reverse rainbow pattern – needed for white light cavities.

Both systems, when added to the back end of existing gravitational wave detectors, would improve the sensitivity at frequencies of a few kilohertz by the 40 times or more needed for listening to the birth of a black hole.

What’s next?

Our research does not represent an instant solution to improving gravitational wave detectors. There are enormous experimental challenges in making such devices into practical tools. But it does offer a route to the 40-fold improvement of gravitational wave detectors needed for observing black hole births.

Astrophysicists have predicted complex gravitational waveforms created by the convulsions of neutron stars as they form black holes. These gravitational waves could allow us to listen in to the nuclear physics of a collapsing neutron star.

For example, it has been shown that they can clearly reveal whether the neutrons in the star remain as neutrons or whether they break up into a sea of quarks, the tiniest subatomic particles of all. If we could observe neutrons turning into quarks and then disappearing into the black hole singularity, it would be the exact reverse of the Big Bang, in which the particles that went on to create our universe emerged from a singularity.

Tags: #tiny #crystal #device #boost #gravitational #wave #detectors #reveal #birth #cries #black #holes

Written by David Blair, Emeritus Professor, ARC Centre of Excellence for Gravitational Wave Discovery, OzGrav, University of Western Australia

This article by David Blair, Emeritus Professor, ARC Centre of Excellence for Gravitational Wave Discovery, OzGrav, University of Western Australia, originally published on The Conversation is licensed under Creative Commons 4.0 International(CC BY-ND 4.0).

We found the first Australian evidence of a major shift in Earth’s magnetic poles. It may help us predict the next

About 41,000 years ago, something remarkable happened: Earth’s magnetic field flipped and, for a temporary period, magnetic north was south and magnetic south was north.

Palaeomagnetists refer to this as a geomagnetic excursion. This event, which is different to a complete magnetic pole reversal, occurs irregularly through time and reflects the dynamics of Earth’s molten outer core.

The strength of Earth’s magnetic field would have almost vanished during the event, called the Laschamp excursion, which lasted a few thousand years.

Earth’s magnetic field acts as a shield against high-energy particles from the Sun and outside the solar system. Without it the planet would be bombarded by these charged particles.

We don’t know when the next geomagnetic excursion will happen. But if it happened today, it would be crippling.

Satellites and navigation apps would be rendered useless — and power distribution systems would be disrupted at a cost of between US$7 billion and US$48 billion each day in the United States alone.

Obviously, satellites and electric grids didn’t exist 41,000 years ago. But the Laschamp excursion — named after the lava flows in France where it was first recognised — still left its mark.

We recently detected its signature in Australia for the first time, in a 5.5 metre-long sediment core taken from the bottom of Lake Selina, Tasmania.

Within the sediment’s magnetic grains lay 270,000 years of history, which we unpack in our paper published in the journal Quaternary Geochronology.




Read more:
Explainer: what happens when magnetic north and true north align?


How sediment can record Earth’s magnetic field

Rock and soil can naturally contain magnetic particles, such as the iron mineral magnetite. These magnetic particles are like tiny compass needles aligned with Earth’s magnetic field.

They can be carried from the landscape into lakes through rainfall and wind. They eventually accumulate on the lake’s bottom, becoming buried and locking in place. They effectively become a fossil record of Earth’s magnetic field.

Scientists can then drill into lake beds and use a device called a magnetometer to recover the information held by the lake sediment. The deeper we drill, the further back in time we go.
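
As a minimal sketch of the depth-to-age idea, the snippet below interpolates ages between a handful of dated horizons in a core. The depths and ages are invented for illustration; real chronologies, including ours, combine several dating methods and more sophisticated age-depth modelling.

```python
import numpy as np

# Minimal age-depth sketch: interpolate ages between dated horizons in a core.
# Depths and ages are invented for illustration, not the Lake Selina chronology.
dated_depth_m = np.array([0.0, 1.0, 2.5, 4.0, 5.5])        # depths with age control
dated_age_ka  = np.array([0.0, 30.0, 90.0, 180.0, 270.0])  # ages, thousands of years

def age_at(depth_m):
    """Linearly interpolate an age (in ka) for any depth within the core."""
    return np.interp(depth_m, dated_depth_m, dated_age_ka)

print(age_at(3.2))   # estimated age of sediment 3.2 m down the core (~132 ka)
```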

In 2014 my colleagues and I travelled to Lake Selina in Tasmania with the goal of extracting the area’s climate, vegetation and “paleomagnetic” record, which is the record of Earth’s magnetic field stored in rocks, sediment and other materials.

Led by University of Melbourne Associate Professor Michael-Shawn Fletcher, we drilled into the lake floor from a makeshift floating platform rigged to two inflatable rafts.

Lake Selina, Tasmania.
Lake Selina is a small sub-alpine lake located near the west coast of Tasmania. Sediment from the lake was sampled in the form of 2x2cm cubes, each containing a few hundred years’ worth of magnetic field history.
Michael-Shawn Fletcher, Author provided

The first Australian evidence of Laschamp

Our dating of the core revealed that the biggest shift in magnetic pole positions and the lowest magnetic field intensity at Lake Selina both occurred during the Laschamp excursion.

But for a core that spanned several glacial periods, no single dating method could be trusted to precisely determine its age. So we employed numerous scientific techniques including radiocarbon dating and beryllium isotope analysis.

The latter involves tracking the presence of an isotope called beryllium-10. This is formed when high-energy cosmic particles bombard Earth, colliding with oxygen and nitrogen atoms in the atmosphere.




Read more:
New evidence for a human magnetic sense that lets your brain detect the Earth’s magnetic field


Since a weaker magnetic field leads to more of these charged particles bombarding Earth, we expected to find more beryllium-10 in sediment containing magnetic particles “locked-in” during the Laschamp excursion. Our findings confirmed this.

The interaction between charged cosmic particles and air particles in Earth’s atmosphere is also what creates auroras. Several generations of people would have witnessed a plethora of spectacular auroras during the Laschamp excursion.

Aurora borealis over the Gulf of Finland.
Shutterstock

Building on work from the 1980s

Only two other lakes in Australia — Lake Barrine and Lake Eacham in Queensland — have provided a “full-vector” record, wherein both the past directions and past intensity of the magnetic field are obtained from the same core.

But at 14,000 years old, the records from these lakes are much younger than the Laschamp excursion. Four decades later, our work at Lake Selina with modern techniques has revealed the exciting potential for similar research at other Australian lakes.

Currently, Australia is considered a paleomagnetic “blind spot”.

Stalactites hang from cave ceiling.
‘Speleothems’ such as stalactites (pictured) and stalagmites are mineral deposits that form in caves.
Shutterstock

More data from lake sediments, archaeological artefacts, lava flows and mineral cave formations, including stalagmites and stalactites, could greatly improve our understanding of Earth’s magnetic field.

With this knowledge, we may one day be able to predict the next geomagnetic excursion before our phones stop working and the birds overhead veer off-course and crash into windows.

Our dating of the Lake Selina core is just the start. We’re sure there are more secrets embedded beneath, waiting to be found. And so we continue our search.


This work was carried out in collaboration with La Trobe University, the Australian National University, The University of Wollongong, the Australian Nuclear Science and Technology Organisation and the European Centre for Research and Teaching in Environmental Geosciences (CEREGE).

Tags: #Australian #evidence #major #shift #Earths #magnetic #poles #predict

Written by Agathe Lise-Pronovost, McKenzie Research Fellow in Earth Sciences, University of Melbourne

This article by Agathe Lise-Pronovost, McKenzie Research Fellow in Earth Sciences, University of Melbourne, originally published on The Conversation is licensed under Creative Commons 4.0 International(CC BY-ND 4.0).

Bendable concrete and other CO2-infused cement mixes could dramatically cut global emissions

One of the big contributors to climate change is right beneath your feet, and transforming it could be a powerful solution for keeping greenhouse gases out of the atmosphere.

The production of cement, the binding element in concrete, accounted for 7% of total global carbon dioxide emissions in 2018. Concrete is one of the most-used resources on Earth, with an estimated 26 billion tons produced annually worldwide. That production isn’t expected to slow down for at least two more decades.

Given the scale of the industry and its greenhouse gas emissions, technologies that can reinvent concrete could have profound impacts on climate change.

As engineers working on issues involving infrastructure and construction, we have been designing the next generation of concrete technology that can reduce infrastructure’s carbon footprint and increase durability. That includes CO2-infused concrete that locks up the greenhouse gas and can be stronger and even bendable.

The industry is ripe for dramatic change, particularly with the Biden administration promising to invest big in infrastructure projects and cut U.S. emissions at the same time. However, to put CO2 to work in concrete on a wide scale in a way that drastically cuts emissions, all of its related emissions must be taken into account.

Rethinking concrete

Concrete is made up of aggregate materials – primarily rocks and sand – along with cement and water.

Because about 80% of concrete’s carbon footprint comes from cement, researchers have been working to find substitute materials.

Industrial byproducts such as iron slag and coal fly ash are now frequently used to reduce the amount of cement needed. The resulting concrete can have significantly lower emissions because of that change. Alternative binders, such as limestone calcined clay, can also reduce cement use. One study found that using limestone and calcined clay could reduce emissions by at least 20% while also cutting production costs.
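
As a rough sanity check of that figure, the arithmetic below assumes cement accounts for about 80% of concrete's footprint (as noted above) and that a blended binder cuts cement-level emissions by roughly a quarter. The 25% cement-level cut is an assumption chosen for illustration, not a number taken from the study.

```python
# Rough arithmetic linking a cement-level saving to concrete's overall footprint.
# Assumptions: cement ~80% of concrete's CO2 (as above); a limestone calcined
# clay blend cuts cement-level emissions by ~25% (assumed for illustration).
cement_share_of_footprint = 0.80
cement_emission_cut = 0.25

overall_cut = cement_share_of_footprint * cement_emission_cut
print(f"overall reduction ~ {overall_cut:.0%}")   # ~20%, consistent with the study's figure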

Apart from developing blended cements, researchers and companies are focusing on ways to use captured CO2 as an ingredient in the concrete itself, locking it away and preventing it from entering the atmosphere. CO2 can be added in the form of aggregates – or injected during mixing. Carbonation curing, also known as CO2 curing, can also be used after concrete has been cast.

These processes turn CO2 from a gas to a mineral, creating solid carbonates that may also improve the strength of concrete. That means structures may need less cement, reducing the amount of related emissions. Companies such as CarbonCure and Solidia have developed technologies to use these processes for concrete poured at construction sites and in precast concrete, such as cinder blocks and other construction materials.
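
For a sense of the chemistry's upper bound, here is a hedged sketch of the mineralisation stoichiometry (CaO + CO2 forming CaCO3). The 65% CaO content is a typical assumed value for ordinary Portland cement, and actual uptake during curing is far below this theoretical ceiling.

```python
# Sketch of the mineralisation chemistry behind CO2 curing: CaO + CO2 -> CaCO3.
# Assumption: ordinary Portland cement is roughly 65% CaO by mass. Real uptake
# during curing is far below this theoretical ceiling.
M_CAO, M_CO2 = 56.08, 44.01   # molar masses, g/mol
cao_fraction = 0.65           # assumed CaO content of cement

max_uptake = cao_fraction * (M_CO2 / M_CAO)   # kg CO2 per kg cement, upper bound
print(f"theoretical ceiling ~ {max_uptake:.2f} kg CO2 per kg cement")
```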

Illustration of CO2 storage possibilities in concrete.
Carbon dioxide can make up a significant percentage of concrete mass.
Lucca Henrion/University of Michigan, CC BY-ND

The Kitahama building, the tallest residential tower in Japan, is built with bendable concrete for earthquake resistance.
MC681/Wikimedia Commons

At the University of Michigan, we are working on composites that produce a bendable concrete material that allows thinner, less brittle structures that require less steel reinforcement, further reducing related carbon emissions. The material can be engineered to maximize the amount of CO2 it can store by using smaller particles that readily react with CO2, turning it to mineral.

The CO2-based bendable concrete can be used for general buildings, water and energy infrastructure, as well as transportation infrastructure. Bendable concrete was used in the 61-story Kitahama tower in Osaka, Japan, and roadway bridge slabs in Ypsilanti, Michigan.

The challenge of lifecycle emissions

These cutting-edge technologies can start addressing concrete infrastructure’s carbon footprint, but barriers still exist.

In a study published Feb. 8, three of us looked at the lifecycle emissions from infusing CO2 into concrete and found that estimates did not always account for emissions from CO2 capture, transportation and use. With colleagues, we came up with strategies for ensuring that carbon curing has a strong emissions benefit.

Overall, we recommend developing a standard CO2 curing protocol. Lab experiments show that CO2 curing can improve concrete’s strength and durability, but results vary with specific curing procedures and concrete mixes. Research can improve the conditions and the timing of steps in the curing process to increase concrete’s performance. Electricity use – the largest emissions source during curing – can also be reduced by streamlining the process and possibly by using waste heat.


Advanced concrete mixes, bendable concrete in particular, already begin to address these issues by increasing durability.

Merging infrastructure and climate policy

In 2020, a wide range of companies announced steps to reduce their emissions. However, government investment and procurement policies are still needed to transform the construction industry.

Local governments are taking the first steps. “Low embodied carbon concrete” rules and projects to reduce the amount of cement in concrete have cropped up around the country, including in Marin County, California; Hastings-on-Hudson, New York; and a sidewalk pilot in Portland, Oregon.

In New York and New Jersey, lawmakers have proposed state-level policies that would provide price discounts in the bidding process to proposals with the lowest emissions from concrete. These policies could serve as a blueprint for reducing carbon emissions from concrete production and other building materials.

Degraded concrete and exposed rebar on a bridge
A lot of North American infrastructure is in a state of disrepair.
Achim Herring/Wikimedia Commons, CC BY

Nationally, the crumbling of federally managed infrastructure has been a steadily growing crisis. The Biden administration could start to address those problems, as well as climate change, and create jobs through a strategic infrastructure program.

Secretary of Transportation Pete Buttigieg recently declared that there were “enormous opportunities for job creation, equity and climate achievement when it comes to advancing America’s infrastructure.” Policies that elevate low-carbon concrete to a nationwide climate solution could follow.

Tags: #Bendable #concrete #CO2infused #cement #mixes #dramatically #cut #global #emissions

Written by Lucca Henrion, Research Fellow at the Global CO2 Initiative, University of Michigan

This article by Lucca Henrion, Research Fellow at the Global CO2 Initiative, University of Michigan, originally published on The Conversation is licensed under Creative Commons 4.0 International(CC BY-ND 4.0).

Amsterdam ousts London as Europe’s top share hub, taking trading back to where it all began

Amsterdam has usurped London to become Europe’s biggest hub for trading shares. It is quite a shift for a city that was fifth behind Paris, Frankfurt and Milan only two months ago, while for the UK this is one of numerous recent developments that highlight the downsides of leaving the EU. So how did this happen, and where do things go from here?

First, a bit of history. At the beginning of the 17th century, the financial centre of the world was not London, New York or Tokyo. It was an exchange built by merchants on the River Amstel in Amsterdam. This was the time of the Dutch Golden Age, when Dutch science, culture and commerce were among the most celebrated in the world.

While share certificates had first been issued in 1288 when the Swedish copper mining company Stora granted the Bishop of Västerås 12.5% ownership, it wasn’t until the early 17th century that organised stock-trading began to emerge.

It happened first in Amsterdam when the Dutch East India Company or Vereenigde Oostindische Compagnie (VOC) issued shares to the public for the first time. This was none other than the world’s first initial public offering (IPO), and provided the capital to fuel the growth of this trading company to become one of the largest multinationals of the era. At its peak, the VOC was worth more than Apple, Google and Facebook combined.

17th century plaque commemorating the Dutch East India Company.
Google Schmoogle.
Wikimedia

Two geographical factors played an important part in Amsterdam becoming a major financial centre. A significant portion of the famously flat Netherlands used to be submerged, which meant the Dutch were used to loaning money to fund land reclamation projects. The Netherlands was also heavily urbanised, with a large number of people available and willing to invest their money.

Formal futures markets, which allow people to bet on the future price of certain assets, also appeared in Amsterdam during the 17th century, reflecting the growing sophistication of financial activities in the city. The most notable centred on the tulip – VOC ships carried these flower bulbs into the country from places like Turkey. As prices for some bulbs reached extraordinarily high levels and then dramatically collapsed, the “tulip mania” is generally considered the first recorded speculative bubble in history, although the trading was not as irrational as usually thought.

The rise of London

Despite the early Dutch dominance in financial trading, organised stock trading really took shape with the advent of the Joint Stock Companies Act in the UK in 1844. Coupled with the industrial revolution, this spurred financial activities to grow in London.

Locals and foreigners started making investments, which enabled the UK to support immense capital requirements during the industrial revolution and was integral to the sustained productivity and welfare improvements that ensued. Other European cities later also developed their own financial activities, driven by an incredible expansion of multinational trade.

Traders on the floor of the London Stock Exchange a few days before the Big Bang of 1986.
John Sturrock/Alamy

But what really made London a magnet for global financial activity was the “Big Bang” of 1986. Until then, the city’s stock exchange was limited to relatively small partnerships of stockbrokers, market makers and the like.

But on October 27 1986, sweeping reforms abolished various constraints on financial transactions and competition, opening trading to a range of new actors, including foreign ones. The City of London turned into a global financial powerhouse, and would go from strength to strength for the next 35 years.

The Brexit effect

Then came Brexit. The trade deal agreed between the UK and the EU on Christmas Eve did not cover financial services. For now, London’s financiers have been barred from certain activities such as trading euro-denominated shares and bonds, which has moved mostly to Amsterdam as a result.

Amsterdam is emerging as the winner because the city hosts the operational headquarters of the stock exchange Euronext. Euronext’s origins can be traced back to the founding of the Amsterdam Stock Exchange by the Dutch East India Company and it has been the largest stock exchange in Europe for some time. It was always likely to benefit when London left the single market. In January an average of £8.1 billion of shares a day were traded in Amsterdam, compared with £7.6 billion in London.

Interior of Euronext Stock Exchange in Amsterdam
Inside the Euronext Stock Exchange in Amsterdam.
Horizons WWP/Alamy

To continue as a global powerhouse, the City of London is hoping the UK and EU regulators can agree on “equivalence”, which is a system that the EU uses to grant domestic market access to foreign firms in certain areas of financial services. But the prospect of a breakthrough agreement looks slim, especially as the EU is eager to capture a greater share of a market which London has dominated for so long.

While the lasting effects of Brexit on London probably won’t be known for years, the first day of business after the UK’s departure from the single market was a symbolic turning point. Public data on January 1 showed that London lost almost 45% of its usual share trading volume. Besides shares and bonds, other affected markets include carbon trading, with €1 billion (£880 million) in daily volumes also moving to the Dutch capital.

Clearly, Europe’s dependence on British finance is no longer a given. There has been a steady trickle of financial institutions moving from London to other European capitals since the UK decision to leave the EU. As far back as October, it was reported that financial services firms operating in the UK had moved around 7,500 employees and more than £1.2 trillion of assets to the EU. Of the work that is shifting, it looks like a carve up: asset management to Dublin, banking to Frankfurt, and securities trading to Amsterdam.

Four hundred years after the dawning of the first era of Dutch financial pre-eminence, Amsterdam is suddenly the continent’s home of share trading once more. It will be interesting to see how the situation develops in the coming months.

Tags: #Amsterdam #ousts #London #Europes #top #share #hub #trading #began

Written by Edward Thomas Jones, Lecturer in Economics, Bangor University

This article by Edward Thomas Jones, Lecturer in Economics, Bangor University, originally published on The Conversation is licensed under Creative Commons 4.0 International(CC BY-ND 4.0).

How Apple and Google let your phone warn you if you’ve been exposed to the coronavirus while protecting your privacy

Virginia has enabled app-less COVID-19 exposure notification services for iPhone users, joining California, Colorado, Connecticut, Hawaii, Maryland, Minnesota, Nevada, Washington, Wisconsin and the District of Columbia. This means iPhone users in those states won’t need to install exposure notification apps and can instead turn on notifications in the phone’s settings.

The services use the coronavirus exposure notification system built jointly by Apple and Google for their smartphone operating systems, iOS and Android, which the companies updated to work without apps. The system uses the ubiquitous Bluetooth short-range wireless communication technology.

As of January, 20 states and the District of Columbia are using the system for exposure notification apps and app-less services. All of the apps and services are voluntary; however, the island of Maui in Hawaii now requires visitors to use one.

Dozens of apps are being used around the world that alert people if they’ve been exposed to a person who has tested positive for COVID-19. Many of them also report the identities of the exposed people to public health authorities, which has raised privacy concerns. Several other exposure notification projects, including PACT, BlueTrace and the Covid Watch project, take a similar privacy-protecting approach to Apple’s and Google’s initiative.

Recently, a study found that contact tracing can be effective in containing diseases such as COVID-19 if large parts of the population participate. Exposure notification schemes like the Apple-Google system aren’t true contact tracing systems because they don’t allow public health authorities to identify people who have been exposed to infected individuals. But digital exposure notification systems have a big advantage: They can be used by millions of people and rapidly warn those who have been exposed to quarantine themselves.

So how does the Apple-Google exposure notification system work? As researchers who study security and privacy of wireless communication, we have examined the system’s specifications and have assessed its effectiveness and privacy implications.

Bluetooth beacons

Because Bluetooth is supported on billions of devices, it seems like an obvious choice of technology for these systems. The protocol used for this is Bluetooth Low Energy, or Bluetooth LE for short. This variant is optimized for energy-efficient communication between small devices, which makes it a popular protocol for smartphones and wearables such as smartwatches.

Bluetooth allows phones that are near each other to communicate. Phones that have been near each other for long enough can approximate potential viral transmission.
Christoph Dernbach/picture alliance via Getty Images

Bluetooth LE communicates in two main ways. Two devices can communicate over the data channel with each other, such as a smartwatch synchronizing with a phone. Devices can also broadcast useful information to nearby devices over the advertising channel. For example, some devices regularly announce their presence to facilitate automatic connection.

To build an exposure notification app using Bluetooth LE, developers could assign everyone a permanent ID and make every phone broadcast it on an advertising channel. Then, they could build an app that receives the IDs so every phone would be able to keep a record of close encounters with other phones. But that would be a clear violation of privacy. Broadcasting any personally identifiable information via Bluetooth LE is a bad idea, because messages can be read by anyone in range.

Anonymous exchanges

To get around this problem, every phone broadcasts a long random number, which is changed frequently. Other devices receive these numbers and store them if they were sent from close proximity. By using long, unique, random numbers, no personal information is sent via Bluetooth LE.

Apple and Google follow this principle in their specification but add some cryptography. First, every phone generates a unique tracing key that is kept confidentially on the phone. Every day, the tracing key generates a new daily tracing key. Though the tracing key could be used to identify the phone, the daily tracing key can’t be used to figure out the phone’s permanent tracing key. Then, every 10 to 20 minutes, the daily tracing key generates a new rolling proximity identifier, which looks just like a long random number. This is what gets broadcast to other devices via the Bluetooth advertising channel.
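
The snippet below is a simplified sketch of that chain of derivations, using HMAC purely for illustration. The real Apple-Google specification uses its own key-derivation and encryption construction, so treat the helper names and parameters here as hypothetical.

```python
import hmac, hashlib, secrets

# Simplified sketch of the derivation chain described above: a secret tracing key
# yields a daily key, which yields short-lived rolling identifiers. Illustration
# only: the real Apple-Google specification uses its own HKDF/AES construction,
# so these helpers and parameters are hypothetical.

tracing_key = secrets.token_bytes(32)   # stays on the phone, never shared

def daily_key(tracing_key: bytes, day_number: int) -> bytes:
    """One-way derivation: the daily key cannot be run backwards to the tracing key."""
    return hmac.new(tracing_key, f"day-{day_number}".encode(), hashlib.sha256).digest()

def rolling_identifier(day_key: bytes, interval: int) -> bytes:
    """A fresh identifier every 10 to 20 minutes; to observers it looks random."""
    return hmac.new(day_key, f"interval-{interval}".encode(), hashlib.sha256).digest()[:16]

dk = daily_key(tracing_key, 18700)
print(rolling_identifier(dk, 42).hex())   # what gets broadcast over Bluetooth LE
```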

Someone testing positive for COVID-19 can disclose a list of their daily tracing keys, usually from the previous 14 days. Everyone else’s phones use the disclosed keys to recreate the infected person’s rolling proximity identifiers. The phones then compare the COVID-19-positive identifiers with their own records of the identifiers they received from nearby phones. A match reveals a potential exposure to the virus, but it doesn’t identify the patient.
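
Continuing that sketch, the matching step might look like the following. Again, the helper and the assumed count of intervals per day are illustrative, not the published specification.

```python
import hmac, hashlib

# Continues the sketch above: rebuild an infected person's identifiers from their
# disclosed daily keys and compare them with identifiers this phone overheard.
# rolling_identifier() is the same hypothetical helper as before.

def rolling_identifier(day_key: bytes, interval: int) -> bytes:
    return hmac.new(day_key, f"interval-{interval}".encode(), hashlib.sha256).digest()[:16]

INTERVALS_PER_DAY = 144   # assumes one identifier roughly every 10 minutes

def find_exposures(disclosed_daily_keys, heard_identifiers):
    """Return (day_key, interval) pairs where a rebuilt identifier was heard nearby."""
    heard = set(heard_identifiers)                    # identifiers this phone recorded
    matches = []
    for day_key in disclosed_daily_keys:
        for interval in range(INTERVALS_PER_DAY):
            if rolling_identifier(day_key, interval) in heard:
                matches.append((day_key, interval))   # potential exposure; no identity revealed
    return matches
```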

The Australian government’s COVIDSafe app warns about close encounters with people who are COVID-19-positive. But unlike the Apple-Google system, COVIDSafe reports the contacts to public health authorities.
Florent Rols/SOPA Images/LightRocket via Getty Images

Most of the competing proposals use a similar approach. The principal difference is that Apple’s and Google’s operating system updates reach far more phones automatically than a single app can. Additionally, by proposing a cross-platform standard, Apple and Google allow existing apps to piggyback and use a common, compatible communication approach that could work across many apps.

No plan is perfect

The Apple-Google exposure notification system is very secure, but it’s no guarantee of either accuracy or privacy. The system can produce a large number of false positives because being within Bluetooth range of an infected person doesn’t necessarily mean the virus has been transmitted. And even if an app records only very strong signals as a proxy for close contact, it cannot know whether there was a wall, a window or a floor between the phones.
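
To see why signal strength is such a crude proxy for distance, here is a sketch using a standard log-distance path-loss model. The calibration values (received power at one metre, path-loss exponent) are assumptions, and they shift with phone model, pockets and walls, which is exactly what drives false positives.

```python
# Why signal strength is only a crude proxy for distance: a standard log-distance
# path-loss model. The calibration values (power at 1 m, path-loss exponent) are
# assumptions and vary with phone model, pockets and walls.

def estimated_distance_m(rssi_dbm: float, rssi_at_1m: float = -60.0,
                         path_loss_exponent: float = 2.0) -> float:
    return 10 ** ((rssi_at_1m - rssi_dbm) / (10 * path_loss_exponent))

print(round(estimated_distance_m(-70), 1))                          # ~3.2 m in free space
print(round(estimated_distance_m(-70, path_loss_exponent=3.0), 1))  # ~2.2 m with heavier attenuation
```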

However unlikely, there are ways governments or hackers could track or identify people using the system. Bluetooth LE devices use an advertising address when broadcasting on an advertising channel. Though these addresses can be randomized to protect the identity of the sender, we demonstrated last year that it is theoretically possible to track devices for extended periods of time if the advertising message and advertising address are not changed in sync. To Apple’s and Google’s credit, they call for these to be changed synchronously.

But even if the advertising address and a coronavirus app’s rolling identifier are changed in sync, it may still be possible to track someone’s phone. If there isn’t a sufficiently large number of other devices nearby that also change their advertising addresses and rolling identifiers in sync – a process known as mixing – someone could still track individual devices. For example, if there is a single phone in a room, someone could keep track of it because it’s the only phone that could be broadcasting the random identifiers.

Another potential attack involves logging additional information along with the rolling identifiers. Even though the protocol does not send personal information or location data, receiving apps could record when and where they received keys from other phones. If this were done on a large scale – such as an app that systematically collects this extra information – it could be used to identify and track individuals. For example, if a supermarket recorded the exact date and time of incoming rolling proximity identifiers at its checkout lanes and combined that data with credit card swipes, store staff would have a reasonable chance of identifying which customers were COVID-19 positive.

And because Bluetooth LE advertising beacons use plain-text messages, it’s possible to send faked messages. This could be used to troll others by repeating known COVID-19-positive rolling proximity identifiers to many people, resulting in deliberate false positives.

Nevertheless, the Apple-Google system could be the key to alerting thousands of people who have been exposed to the coronavirus while protecting their identities, unlike contact tracing apps that report identifying information to central government or corporate databases.

This is an updated version of an article originally published on April 30, 2020.


Tags: #Apple #Google #phone #warn #youve #exposed #coronavirus #protecting #privacy

Written by Johannes Becker, Doctoral student in Electrical & Computer Engineering, Boston University

This article by Johannes Becker, Doctoral student in Electrical & Computer Engineering, Boston University, originally published on The Conversation is licensed under Creative Commons 4.0 International(CC BY-ND 4.0).

Bitcoin: why a wave of huge companies like Tesla rushing to invest could derail the stock market

After Tesla announced it has invested US$1.5 billion in bitcoin and expects to start accepting the cryptocurrency as a payment for its electric vehicles in the near future, the bitcoin price went soaring. It went from around US$39,400 to an all-time high of over US$48,000 in less than 24 hours.

The price is now up by over 50% in the first six weeks of 2021. Led by Elon Musk, Tesla’s investment is obviously in profit already: depending on the exact day of the purchase, it is likely to be worth over US$2 billion, pointing to a paper profit of over US$500 million. To put that in context, when the electric car-maker made its first ever annual net profit in 2020, it was just over US$700 million.

The bitcoin price.
TradingView

Tesla’s move into bitcoin comes on the back of a wave of institutional money invested in the world’s leading cryptocurrency in recent months, plus numerous other companies putting it into their treasury reserves. With the world’s sixth most valuable company also saying it might buy and hold other digital assets “from time to time or long term”, it must be tempting for other major companies to do likewise. Since the Tesla announcement, Twitter finance director Ned Segal has already signalled that his company is considering such a move, while a research note from the Royal Bank of Canada has made a case for why it would benefit Apple.

The prospect of a bluechip invasion into bitcoin has caused much excitement among cryptocurrency investors. But if Tesla does trigger such a goldrush, there will also be some unsettling consequences.

Volatility spillover

Tesla justified this material change in the way it manages its treasury reserves by stating that investing in bitcoin will “provide us with more flexibility to further diversify and maximise returns on our cash”. Corporate treasurers have always used the money markets to invest surplus cash to eke out small yields, and it is harder than it used to be in the current long-term low interest rate environment.

All the same, this is very different to standard money management. Bitcoin is a highly volatile asset that you would not typically associate with the cash reserves on the balance sheet of a listed company worth close to a trillion US dollars. As recently as March 2020, the price dipped below US$4,000. Even in 2021, the price fell more than 30% before its most recent surge.

Tesla logo in front of a bitcoin
Fomo-ing at the mouth?
24K Production

Tesla has put almost 8% of its reserves into the cryptocurrency. If Apple, Microsoft, Facebook, Twitter and Google were to do the same, this would translate into almost another US$7 billion investment. This is less than 1% of the total current worth of the bitcoin market, but the signal that it would send to other companies and retail investors would likely trigger a bull run that would make the current market look comparably stable. Some crypto analysts are already predicting that the price will rise to US$100,000 or even US$200,000 before 2021 is out.

Such a rise would drive up the value of the bitcoin on corporate balance sheets to multiples of what it was at the time of investment. Tesla’s 8% allocation may already have gone up to 12% of the value of its reserves, for instance. And if it follows through on a potential plan to keep any bitcoins it receives for electric cars instead of converting them into dollars, that percentage could rise all the faster.
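
A quick back-of-the-envelope calculation shows how that drift works, using the 8% starting allocation and the roughly 50% price rise mentioned above; the figures are illustrative rather than Tesla's actual accounts.

```python
# How a fixed bitcoin purchase drifts as a share of reserves when the price moves.
# The 8% starting allocation is the article's figure; the ~50% rise is the move
# described earlier. Figures are illustrative, not Tesla's accounts.
initial_allocation = 0.08
price_multiple = 1.5                      # bitcoin up ~50% since the purchase

btc_value = initial_allocation * price_multiple
other_reserves = 1.0 - initial_allocation
new_allocation = btc_value / (btc_value + other_reserves)
print(f"allocation after the move: {new_allocation:.0%}")   # ~12%, as suggested above
```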

The problem is the potential effect on company share prices. Tesla’s share price rose 2% on the news of the bitcoin investment, though it has since fallen by 5%. But a longer-term example is US tech company MicroStrategy. Its share price has ballooned tenfold in the past year on the back of a heavy investment in bitcoin, but it is also down by almost a quarter in the days since the Tesla announcement.

Writ large, this could make stock markets far choppier in future – and vulnerable to a nosedive when the bitcoin bull market ends. It would be easy to imagine that this could prompt a wider wave of selling as investors sought to cover their loss-making positions, which could be very dangerous for financial stability.

What the regulators will do

Global regulators will no doubt be concerned about a potential volatility spillover from digital asset prices into traditional capital markets. They may not permit what could quickly become back-door proxy exposure to bitcoin, with listed companies holding large proportions of a volatile asset on their balance sheets.

Gary Gensler portrait shot in front of US flag
Gary Gensler, new head of the SEC.
Wikimedia

We have already seen the likes of European Central Bank president Christine Lagarde and new US Treasury secretary Janet Yellen calling for more bitcoin regulation in recent weeks.

The view from US regulator the SEC will be extremely important, and it is difficult to predict the response of newly appointed head Gary Gensler, who is himself a crypto expert. We may see anything from a wait-and-see approach through to a ban on listed companies holding any bitcoin-like assets.

But I would expect that if the price of bitcoin continues towards US$100,000, there may be a regulatory restriction on the reserve percentage that listed companies can hold in digital assets. This would be similar to the US rule that companies cannot buy back more than 25% of the average daily volume of their own stock. Such a rule would force companies to sell bitcoin if a price increase meant their holdings broke the maximum level, creating a form of sell pressure that the crypto market has not seen before.

For now, however, bitcoin continues to look like a “buy” asset on the back of the Tesla announcement. The crypto community will be watching to see whether other major companies follow suit, and whether Tesla has the conviction to stay invested when its next quarterly announcement comes around. But if this trend continues, make no mistake that a reckoning will be coming over the prospect of the heady volatility of the crypto market going mainstream. Watch this space.

Tags: #wave #huge #companies #Tesla #rushing #invest #derail #stock #market

Written by Gavin Brown, Associate Professor in Financial Technology, University of Liverpool

This article by Gavin Brown, Associate Professor in Financial Technology, University of Liverpool, originally published on The Conversation is licensed under Creative Commons 4.0 International(CC BY-ND 4.0).

Durex condoms: how their teenage immigrant inventor was forgotten by history

Around 975 Durex condoms are sold every minute. The global condom market is predicted to grow to over US$11 billion (£8 billion) by 2023, and Durex is in the privileged position of being the world’s most popular brand. Yet until recently, the young man who invented Durex’s mass-produced condom had been forgotten – even by the manufacturer itself.

The origins of Durex go back to the London Rubber Company, which began trading in 1915 and specialised in importing modern, disposable condoms for re-sale in Great Britain. In 1932 the firm underwent a game-changing switch from wholesaling to fabrication, when it started manufacturing in-house under the Durex brand. By the mid-1940s, London Rubber had the biggest production capacity in Britain, and by the mid-1960s, the world.

An unwrapped, unrolled 1967 condom, and a packet of three wrapped ones.
1960s Durex Gossamer condom, photographed by Jessica Borge in 2018.
© Private collection of Jessica Borge, 2018, Author provided

As a social historian with an interest in businesses, I had been intrigued by London Rubber ever since a friend pointed out the then-derelict factory in Chingford in the late 1990s, after production was moved to Asia. I was fascinated by the idea that thousands of ordinary Londoners made their living from condoms, and set about researching the topic for my PhD. Turning this work into a book gave me the opportunity to deepen my research.

The one question I really wanted to answer was who actually invented Durex condoms. They have long been attributed to a man called Lionel Alfred Jackson, a third-generation Russian-Jewish immigrant who founded London Rubber in 1915. It was Jackson who, in 1929, patented the Durex trademark (standing for “Durability, Reliability, and Excellence”). But surely there was more to this story?

Red book cover reading 'Protective Practices: A History of the London Rubber Company and the Condom Business', JESSICA BORGE.

McGill-Queen’s University Press

My style of research involves the painstaking examination of documents. But as no company archive for London Rubber is available to researchers, my investigation has involved forensic detective work, with discoveries often coming about through hunches.

The memorable name “Lucian Landau” had popped up in correspondence between London Rubber and the Family Planning Association (held at Wellcome Collection) and on patents. But Landau wasn’t mentioned in the few official documents archived at Vestry House Museum, or in the company magazine London Image, which had been supplied to me by ex-London Rubber employee Angela Wagstaff. This piqued my interest: Landau must have been somebody important if he was on the patents.

I checked the British Library on the off-chance it held some reference to Landau and was thrilled to find a rare copy of his self-published autobiography. Incredibly, Landau had written the story of his involvement with London Rubber. I was able to triangulate his account with the rest of my evidence, and the pieces fell into place.

Black and white photo of smiling man in warehouse coat stood outside door with sign for 'British Latex Products'.
Lucian Landau, the inventor of Durex condoms, at the entrance to British Latex Products, 1932.
© Vestry House Museum and the London Borough of Waltham Forest, Author provided

What I discovered was that while Jackson came up with the business model for supplying condoms, the technology behind Durex was invented by Landau, a Polish teenager living in Highbury and studying rubber technology at the former Northern Polytechnic (now London Metropolitan University).

His story is fascinating: Landau had left London Rubber under a cloud in 1953 and was erased from the official company history. He then built a new life for himself as a medium and psychic investigator. Until now, Landau’s centrality to the modern condom has gone unrecognised. But if Landau was so important to London Rubber, how did he come to be erased from the history of Durex condoms?

Landau in London

Born in Warsaw, Poland, in 1912, Landau was sent to London to study rubber technology by his family, who were small-time industrialists dealing in rubber, perfume, cosmetics and soap.

Cover of old prospectus, with college name in large letters, surrounded by decorative border.
Example prospectus, Northern Polytechnic Institute, for the 1922-23 session. Landau began his rubber technology course c.1929-1930.
Reproduced by permission of London Metropolitan University’s Library and Special Collections, Author provided

It was 1929 and he was only 17. The expectation was that, once trained, he would return to Poland and take over his father’s business. But Landau soon realised he adored London and did not want to leave. “I felt more at home here than I ever felt in Warsaw,” he wrote in his autobiography. “I felt I could never leave this place.”

Attending rubber technology classes in Holloway and living with other boarders (and a singing parrot) in Highbury Place, his great pleasure was to explore the local area on foot. Upper Street and Chapel Market were favourites, and Landau quickly learned the streets of Islington, Hackney and Camden, the City and West End. “I was mainly interested in shop windows,” he said, “and particularly in various rubber articles.”

Having tasted independence, Landau was ready to go it alone in London but was ineligible to seek employment under his student visa. He could, however, start a business. His fascination with shop windows in this nation of shopkeepers would prove key.

Unconvinced of the quality of rubber toilet sponges available, Landau developed a new sponge process in the polytechnic’s lab, set up a manufacturing firm and offered employment. The Home Office granted him leave to remain, so long as business continued.

But Landau was dissatisfied with toilet sponges. It was while experimenting with a Pirelli latex sample and some glass tubing, Landau writes, that he hit upon the idea of making latex condoms. “I knew that these products were all imported from Germany and America and there was no British manufacturer,” he wrote. “The plant required would be simple to construct and I could probably make it myself.”

Black and white photo of large building with clock tower, with horse-drawn carts and carriages passing in front.
The Northern Polytechnic Institute, Holloway Road, 1906, where Lucian Landau first experimented with latex.
Reproduced by permission of London Metropolitan University’s Library and Special Collections, Author provided

Condoms have been around since ancient times. Early versions were often made from animal skin, and production became a cottage industry in places like 18th-century London, where women ran condom warehouses around Leicester Square and Covent Garden. Following Thomas Hancock’s discovery of vulcanisation – the heat treatment of rubber – in the 1840s, heavy-weight, re-usable rubber sheaths were made. But these were far from ideal, having a bulky and uncomfortable seam along the lower edge.

Improvements towards the end of the 19th century led to condoms being made by dipping condom-shaped formers into a sort of rubber cement. This got rid of the seam but the process involved inflammable solvents that sometimes led factories to catch fire. But by the 1920s, condoms were being made using a safe latex dipping process developed in Germany and America. Latex was relatively new in Britain, giving Landau a first-mover advantage.

Travelling condom salesman

Hoping to generate interest in his new product, Landau revisited the shops from his walks. As luck would have it, a “Mr French” who ran a retail pharmacy at the corner of Mare Street and Well Street, Hackney, suggested that Landau seek out the legendary Lionel Jackson.

A travelling condom salesman, Jackson had put in the leg work visiting retailers – chemists’ shops, herbalists and “hygienic stores” – up and down the country, selling bought-in condoms wholesale for his company, London Rubber. Well known and respected, Jackson offered stockists a personal service and competitive margins.

Landau’s expertise in the production of rubber consumables presented Jackson with an exciting prospect: the chance to compete directly with manufacturers. So Jackson loaned Landau £600 to set up British Latex Products, to supply condoms to London Rubber under the Durex brand. Jackson took a 60% controlling share, retaining Landau at £5 per week.



Manual condom production began in 1932 underneath the “Rock-A-Bye” baby shoe works on a small industrial estate on Shore Road. Landau was barely 20 years old. Jackson gave Landau freedom to design and organise his plant without interference, and the two got along, Jackson being one of the few people at London Rubber whom Landau ever genuinely liked.

But in 1934, aged just 40, Jackson died from cancer of the spine. According to Landau’s autobiography, this had spread throughout his body and caused him enormous pain. His death was unexpected and there was no will, so his brother, Elkan, and sisters, Mrs Collins and Mrs Power, inherited the company. Operations were run by the young Angus Reid, who had been the first hire outside of the family eight years before Landau entered the scene.

Landau was not pleased with the new setup. He felt the remaining Jacksons to be “of limited intelligence” and found Elkan in particular a thorn in his side, although others at the firm spoke warmly of the man, who laughed a lot and brought everybody cakes. Years of animus followed and Landau experienced little contact with executives, preferring the methodical calm of his workshop.

Nonetheless, Landau was vital to London Rubber and remained in charge of production and R&D, overseeing the firm’s relocation to a purpose-built factory in Chingford in the late 1930s. It was here that Landau really began developing his technical ingenuity in condom production, especially during the second world war. Tested by wartime conditions, Landau had to find ways to produce a consistent product with varying qualities of latex.

From 1942, and under the wartime rationalisation scheme, latex was diverted to Chingford (and, crucially, away from competitors) so that London Rubber could supply the British forces with condoms. For a period, production was moved to a safer site underneath the Chingford viaduct, to protect workers (and condoms) from enemy bombing.

Historic Durex rolled condom and packaging
Durex prophylactic condom c.1942-1945, produced by London Rubber Chingford under the supervision of Lucian Landau and supplied to the Allied Forces.
© Private collection of Jessica Borge, 2018, Author provided

At this stage, condom production was still a largely manual process, although some of the actual dipping was electronically assisted. Important parts of the process, such as stripping and testing the finished product, were done entirely by hand. The Herculean efforts involved in wartime production saw Landau maximise yield under pressured conditions. This ensured that, coming out of the war, London Rubber was Britain’s biggest condom manufacturer.

The machines that made Durex

After the war, Landau was made a director of London Rubber. But although he oversaw some important business decisions (such as taking the company public in 1950) his strength and legacy lay in technical achievements, which I have been able to corroborate by comparing his autobiography with my databank of documentary sources.

In particular, he was responsible for designing the sophisticated “automated protective” lines installed at the Chingford factory, “protective” being the preferred London Rubber word for condom. Together with the Durex brand, forever synonymised with condoms following the war, Landau’s automated machines constituted the strongest barrier to competition wielded by London Rubber.

Though developed by Landau in the 1940s, the first two automated machines were not installed until 1950 and 1952, after the wartime rationing of steel had ended. Each was about 200 yards long and had two “double decker” production lines, wherein hollow glass formers were dipped into two latex baths, on a large conveyor belt. These “double-dipped” condoms were then heat-treated, rinsed and rolled up by spinning brushes before being passed through a chalk solution to prevent sticking. They were then collected into a chute for air testing.

Incredibly, colour footage of this entire process survives, having been included in an award-winning educational film, According to Plan, produced by the company in 1964, which I have written about elsewhere.

A woman in white tests condoms on an automated production line.
Electronic testing in the 1950s.
© Vestry House Museum and the London Borough of Waltham Forest, Author provided

Thanks to these marvellous machines, production increased from 2 million condoms a year in the early 1930s, when the latex dipping was done manually, to the same volume each week in the early 1950s. By 1954, weekly production ran at 2.5 million, and output increased 29-fold overall as lines were added between 1951 and 1960.

Landau wasn’t the only inventor to create dipping chains. Fred Killian in Akron, Ohio, developed a similar technology in the 1930s. But whereas American condom manufacturers (such as Youngs) had to lease the Killian model, London Rubber’s machines were unique in Britain and protected by patent, meaning London Rubber could produce at economies of scale unmatched by other local producers. The product was also top quality, as I found out when, for a recent talk, I repeated a 1960s consumer test by filling an original 1967 Durex Gossamer with five pints of water.

Aged Condom Challenge: Can a 53-year-old Durex hold five pints of water?

But success also marked the end of the line for Landau, who had been harbouring ill feeling ever since Lionel Jackson’s death. This had been exacerbated by alleged dodgy dealings during the war when, Landau claims in his autobiography, Elkan Jackson and Angus Reid sold condoms on the black market.

After the war, a taxman shadowed Landau for a whole month while the company was under investigation for tax evasion related to the alleged black-marketing. In the end, Landau was cleared but Jackson and Reid (Landau wrote) were fined. Landau insisted upon his appointment to the board and 20% of London Rubber shares as compensation for being put in such a difficult position.

But these incidents never left him. Bad feeling was compounded by a series of unfortunate events in the 1950s, which led to his leaving the company for good.

Landau’s last days

London Rubber was a family firm with a positive ethos, routinely marking the achievements of its staff with a characteristic sense of occasion that made it an eventful and pleasant place to work. Life events (such as marriages and babies), as well as loyalty, were marked by the ceremonial giving of clocks, watches, layettes and other gifts.

Landau, by all accounts a deep-feeling but standoffish man, tended to keep himself to himself. Nonetheless, by the time his 21st work anniversary came up in 1953, he had a reasonable expectation that his tremendous accomplishments would be recognised. But the anniversary slipped by unmentioned and Landau did not receive the customary gold watch. This is seemingly corroborated by the absence of his name on the Long Service wall of fame (which was photographed for the company magazine, London Image, in 1969) and Landau was completely elided from official company histories – until I rediscovered him.

Historians of contraception and condoms (and, indeed, the companies that subsequently inherited the Durex brand) cannot be blamed for overlooking Landau. I did the same before I looked deeper, particularly as the Lionel Jackson founding story was so convenient.

But at the time, the neglect of Landau’s milestone seemed deliberate to him, and in the absence of internal company records, his is the only account of the incident we have. Personal misfortune befell Landau in the summer of 1953. While going through his second divorce, he had a love affair with switchboard operator Alice Maud, and the pair became the subject of office gossip. Tragically, Maud took her own life, leaving Landau reeling. He kept the two aspirin bottles she had emptied for the rest of his life.

Back at London Rubber, tongues were wagging and Landau was miserable. “I asked myself why I should continue to work with people whom I did not like, and who did not really appreciate all I was doing,” he wrote. Enough was enough, and he resigned. Although some colleagues were markedly upset (his secretary also resigned in sympathy) there was little love lost. Landau picked up his things in September 1953 and never set eyes upon London Rubber again.

Grey card packets with purple writing reading 'Durex gossamers'.
1960s Durex Gossamer, the first lubricated condom.
© Private collection of Jessica Borge, 2018. Author provided

At 42, Landau was still young and, with abundant shares and savings from London Rubber to support him, was free to pursue other interests alongside occasional consultancy work. He possessed a strong interest in spiritual matters, including mediumship (making contact with the dead), which was not unusual in the 20th century and was referenced fairly regularly in popular culture.

Landau’s interest became more important when, as he wrote in his autobiography, he began to hear Maud speaking to him after her death, advising on day-to-day matters such as catching the correct tube train, which Landau reported finding useful. Apparently she had also passed a message from beyond the grave to the clairvoyant Florence Thompson, whom Landau happened to see shortly after resigning. The message was to the point: Maud was sorry for what she had done but could not undo it.

Economically independent, making new friends and forever moved by his relationship with Maud, Landau spent the next 50 years developing his abilities as a medium through the London Spiritualist Alliance (now the College of Psychic Studies) and the Society for Psychical Research, while also investigating psychic phenomena.

Landau’s early adulthood had been monopolised by London Rubber, but his awareness of strange and seemingly inexplicable happenings had been with him since at least the 1930s, when he first started making condoms. Amazingly, there is video footage of a very elderly Landau recalling his “psychic experiences”, which I was directed to by the author Christopher Josiffe. It was only when I actually watched the video that I realised Landau had been captured describing a supposed incident of psychic healing that took place in the first factory in the 1930s and is also described in his autobiography.

A girl’s hand had been crushed by a condom dipping rack but Landau, he wrote, healed her bloody, mangled fingers by holding them. The girl was apparently so grateful that she named her son after him. The video of him recounting this incident was filmed by Landau’s friend and colleague from the SPR, the late Mary Rose Barrington, who was a loyal supporter and wanted Landau’s achievements recognised. The society has kindly granted permission for me to show this excerpt for the first time.

Landau had many mysterious experiences throughout his life, which are recounted in various volumes written by and for the wider community of people interested in psychic phenomena. He also published many papers on dowsing, but never lost his aptitude for everyday technical solutions, doing odd jobs around the College of Psychic Studies, such as mending televisions and plumbing.

But it was through his participation in London’s psychic and spiritualist communities in later life that Landau ultimately found happiness, meeting his third wife, Eileen, in 1955, and moving to the Isle of Man in 1967. Lucian Landau, psychic investigator, medium, dowser and original Durex technologist, died in 2001, satisfied that his life’s work was complete.

Landau’s automated machines were in use up until the closure of the Chingford production plant in the summer of 1994. Today, delightfully, Landau’s name has been restored to the official history of Durex following early publicity for my book.

I was hoping this would happen. It is easy to be cynical, but sometimes we only need look a little deeper to find the human story behind that which we take for granted.





Written by Jessica Borge, Digital Collections (Scholarship) Manager at King’s College London Archives and Research Collections; Visiting Fellow in Digital Humanities, School of Advanced Study.

This article was originally published on The Conversation and is licensed under Creative Commons Attribution-NoDerivatives 4.0 International (CC BY-ND 4.0).