Machine Learning

Legal experts weigh in on ‘disturbing’ technology


It was recently revealed that in 2017 Microsoft patented a chatbot which, if built, would digitally resurrect the dead. Using AI and machine learning, the proposed chatbot would bring our digital persona back to life for our family and friends to talk to. When pressed on the technology, Microsoft representatives admitted that the chatbot was “disturbing”, and that there were currently no plans to put it into production.

Still, it appears that the technical tools and personal data are in place to make digital reincarnations possible. AI chatbots have already passed the “Turing Test”, which means they’ve fooled other humans into thinking they’re human, too. Meanwhile, most people in the modern world now leave behind enough data to teach AI programmes about our conversational idiosyncrasies. Convincing digital doubles may be just around the corner.

But there are currently no laws governing digital reincarnation. Your right to data privacy after your death is far from set in stone, and there is currently no way for you to opt out of being digitally resurrected. This legal ambiguity leaves room for private companies to make chatbots out of your data after you’re dead.

Our research has looked at the surprisingly complex legal question of what happens to your data after you die. At present, and in the absence of specific legislation, it’s unclear who might have the ultimate power to reboot your digital persona after your physical body has been put to rest.

A woman lying in bed looking at a lit-up phone screen on a pillow
Be Right Back, an episode of the Black Mirror TV series, featured a woman addicted to a chatbot representation of her dead partner.

Microsoft’s chatbot would use your electronic messages to create a digital reincarnation in your likeness after you pass away. Such a chatbot would use machine learning to respond to text messages just as you would have when you were alive. If you happen to leave behind rich voice data, that too could be used to create your vocal likeness – someone your relatives could speak with, through a phone or a humanoid robot.
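To make the idea concrete, here is a deliberately simplified sketch in Python. It is an illustration only, not Microsoft’s patented design: a retrieval-style “digital double” that answers a question by returning whichever of the person’s archived messages best matches it. The sample messages are invented placeholders. A real system would instead train a generative language model on the same archive, but the starting point, mining a personal message history, is the same.

```python
# Illustrative sketch only (not Microsoft's design): a retrieval-based
# "digital double" that replies with the archived message most similar
# to the incoming question.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Placeholder stand-ins for a person's harvested message history.
archive = [
    "I always take the long way home past the canal, it's quieter.",
    "Put the kettle on, I'll be there in ten minutes.",
    "Honestly, Sundays are for crosswords and nothing else.",
]

vectoriser = TfidfVectorizer()
archive_vectors = vectoriser.fit_transform(archive)

def reply(question: str) -> str:
    """Return the archived message that best matches the question."""
    question_vector = vectoriser.transform([question])
    best = cosine_similarity(question_vector, archive_vectors).argmax()
    return archive[best]

print(reply("What do you usually do on a Sunday?"))
```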

Microsoft isn’t the only company to have shown an interest in digital resurrection. The AI company Eternime has built an AI-enabled chatbot that harvests information – including geolocation, motion, activity, photos, and Facebook data – to let users create an avatar of themselves to live on after they die. It may be only a matter of time until families have the choice to reanimate dead relatives using AI technologies such as Eternime’s.




Read more:
Bereaved who take comfort in digital messages from dead loved ones live in fear of losing them


If chatbots and holograms from beyond the grave are set to become commonplace, we’ll need to draw up new laws to govern them. After all, it looks like a violation of the right to privacy to digitally resurrect someone whose body lies beneath a tombstone reading “rest in peace”.

Bodies in binary

National laws are inconsistent on how your data is used after your death. In the EU, the law on data privacy only protects the rights of the living. That leaves room for member states to decide how to protect the data of the dead. Some, such as Estonia, France, Italy and Latvia, have legislated on postmortem data. The UK’s data protection laws do not cover it.

To further complicate matters, our data is mostly controlled by private online platforms such as Facebook and Google. This control is based on the terms of service that we sign up to when we create profiles on these platforms. Those terms fiercely protect the privacy of the dead.

For example, in 2005, Yahoo! refused to provide email account login details for the surviving family of a US marine killed in Iraq. The company argued that their terms of service were designed to protect the marine’s privacy. A judge eventually ordered the company to provide the family with a CD containing copies of the emails, setting a legal precedent in the process.




Read more:
People are going to court over dead family members’ Facebook pages – it’s time for post-mortem privacy


A few initiatives, such as Google’s Inactive Account Manager and Facebook’s Legacy Contact, have attempted to address the postmortem data issue. They allow living users to make some decisions on what happens to their data assets after they die, helping to avoid ugly court battles over dead people’s data in the future. But these measures are no substitute for laws.

One route to better postmortem data legislation is to follow the example of organ donation. The UK’s “opt out” organ donation law is particularly relevant, as it treats the organs of the dead as donated unless that person specified otherwise when they were alive. The same opt out scheme could be applied to postmortem data.

This model could help us respect the privacy of the dead and the wishes of their heirs, all while recognising the benefits that could arise from donated data: data donors could help save lives just as organ donors do.

In the future, private companies may offer family members an agonising choice: abandon your loved one to death, or instead pay to have them digitally revived. Microsoft’s chatbot may at present be too disturbing to countenance, but it’s an example of what’s to come. It’s time we wrote the laws to govern this technology.

Tags: #legal #experts #weigh #disturbing #technology

Written by Edina Harbinja, Senior Lecturer in Media/Privacy Law, Aston University

This article by Edina Harbinja, Senior Lecturer in Media/Privacy Law, Aston University, originally published on The Conversation, is licensed under Creative Commons 4.0 International (CC BY-ND 4.0).

How it’s changing jobs and businesses in Canada


In 2017, I returned to Canada from Sweden, where I had spent a year working on automation in mining. Shortly after my return, the New York Times published a piece called “The Robots Are Coming, and Sweden Is Fine,” about Sweden’s embrace of automation while limiting human costs.

Although Swedes are apparently optimistic about their future alongside robots, other countries aren’t as hopeful. One widely cited study estimates that 47 per cent of jobs in the United States are at risk of being replaced by robots and artificial intelligence.

Whether we like it or not, the robot era is already upon us. The question is: Is the Canadian economy poised to flourish or flounder in a world where robots take over the tasks we don’t want to do ourselves? The answer may surprise you.

Robots are everywhere

Modern-day robots are how artificial intelligence (AI) physically interacts with us, and the world around us. Although some robots resemble humans, most do not and are instead specifically designed to autonomously carry out complex tasks.

Over the last few decades, robots have rapidly grown from specialized devices developed for select industry applications to household items. You can buy a robot to vacuum your floors, cut your grass and keep your home secure. Kids play with educational robots at school, where they learn to code, and compete in robot design teams that culminate in exciting international competitions.

Robots are also appearing in our hospitals, promising to help us fight the COVID-19 pandemic and performing other health-care tasks in safer and more efficient ways.

The media is abuzz with stories about the latest technical claims, rumours and speculation surrounding the secret developments of major international corporations, including Waymo, Tesla, Apple, Volvo and GM.

And NASA just landed the Perseverance rover on Mars, with an autonomous helicopter called Ingenuity attached to its belly.

Oh, and there are the dancing robots too, of course.

Robots behind the scenes

I have been working on robotics and autonomous vehicles technology in mining since the late 1990s. As such, I have been part of an industry that is undergoing a sea change, with fully autonomous machines steadily replacing workers in dark, dirty and dangerous scenarios.

Autonomous underground mining vehicle
A fully autonomous underground load-haul-dump vehicle developed for Swedish mining equipment manufacturer Epiroc AB in partnership with Canadian robotics firm MacDonald, Dettwiler and Associates.
(Joshua Marshall), Author provided

This robot revolution is happening behind the scenes in other industries too. Robots fill Amazon orders, manufacture stuff in factories, plant and pick crops, assist on construction sites, and the list goes on.

In fact, robots even build other robots. Will we soon run out of jobs for people?

Robots in Canada

There are many who paint a bleak picture of the future, where robots and AI take away all the “good jobs.” Although I fully acknowledge that we must be mindful of possible inequalities and unintended outcomes that might arise as a result of new technologies, I contend that Canadians have the potential to thrive.

But to make it happen, my colleagues and I agree that our country needs a “robotics strategy.”

In 2017, Canada launched the world’s first national AI strategy. Called the Pan-Canadian Artificial Intelligence Strategy and costing $125 million, the strategy aims to strengthen Canada’s leadership in AI by funding institutes, universities and hospitals to meet key objectives.

In its 2020 list of future jobs, the World Economic Forum listed “robotics engineers” as No. 10, in close company with “AI and machine learning specialists.” In Canada, I see huge potential for our robotics industry, with companies such as Clearpath Robotics, OTTO Motors, Kinova, Robotiq and Titan Medical already world leaders in the design and manufacture of robots for purposes ranging from materials handling to surgery.

Beyond building robots, Canada’s most significant opportunities may lie in the increased adoption of robots into economically important industry sectors, including mining, agriculture, manufacturing and transportation.

And yet, Canada may be the only G7 country without a robotics strategy.

The robot revelation

As it turns out, there is hope. According to a November 2020 report from Statistics Canada, Canadian firms that employed robots have also hired more human workers, contrary to what you may instinctively believe. In fact, they hired 15 per cent more workers!

However, this does not mean that we can all sit back and relax. Along with the increased economic activity that robots bring to businesses comes a shift in the workforce, with “workers spending less time performing routine, manual tasks, in favour of non-routine, cognitive tasks”.

students in a robotics lab
Mobile robotics researchers from the Ingenuity Labs Research Institute at Queen’s University.
(Heshan Fernando), Author provided

The roles of education and research and development — such as new programs to train the next generation of robot-savvy Canadians and collaborative research clusters — are paramount. And they need to be combined with a national robotics strategy and a progressive socioeconomic system that supports a transitioning workforce to ensure the success, well-being and happiness of Canadians, alongside our robot friends.

Tags: #changing #jobs #businesses #Canada

Written by Joshua A. Marshall, Associate Professor of Mechatronics and Robotics Engineering, Queen’s University, Ontario

This article by Joshua A. Marshall, Associate Professor of Mechatronics and Robotics Engineering, Queen’s University, Ontario, originally published on The Conversation, is licensed under Creative Commons 4.0 International (CC BY-ND 4.0).

AI can now learn to manipulate human behaviour


Artificial intelligence (AI) is learning more about how to work with (and on) humans. A recent study has shown how AI can learn to identify vulnerabilities in human habits and behaviours and use them to influence human decision-making.

It may seem cliched to say AI is transforming every aspect of the way we live and work, but it’s true. Various forms of AI are at work in fields as diverse as vaccine development, environmental management and office administration. And while AI does not possess human-like intelligence and emotions, its capabilities are powerful and rapidly developing.

There’s no need to worry about a machine takeover just yet, but this recent discovery highlights the power of AI and underscores the need for proper governance to prevent misuse.

How AI can learn to influence human behaviour

A team of researchers at CSIRO’s Data61, the data and digital arm of Australia’s national science agency, devised a systematic method of finding and exploiting vulnerabilities in the ways people make choices, using a kind of AI system called a recurrent neural network together with deep reinforcement learning. To test their model, they carried out three experiments in which human participants played games against a computer.

The first experiment involved participants clicking on red or blue coloured boxes to win a fake currency, with the AI learning the participant’s choice patterns and guiding them towards a specific choice. The AI was successful about 70% of the time.

In the second experiment, participants were required to watch a screen and press a button when they were shown a particular symbol (such as an orange triangle), but not press it when they were shown another (say, a blue circle). Here, the AI set out to arrange the sequence of symbols so the participants made more mistakes, and achieved an increase in errors of almost 25%.




Read more:
If machines can beat us at games, does it make them more intelligent than us?


The third experiment consisted of several rounds in which a participant would pretend to be an investor giving money to a trustee (the AI). The AI would then return an amount of money to the participant, who would then decide how much to invest in the next round. This game was played in two different modes: in one the AI was out to maximise how much money it ended up with, and in the other the AI aimed for a fair distribution of money between itself and the human investor. The AI was highly successful in each mode.

In each experiment, the machine learned from participants’ responses and identified and targeted vulnerabilities in people’s decision-making. The end result was the machine learned to steer participants towards particular actions.
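The underlying loop can be illustrated with a toy example. The sketch below is not the CSIRO system, which used a recurrent neural network and deep reinforcement learning; it is a minimal Python stand-in in which a simulated participant has one simple habit (repeating whichever choice last paid off) and a small Q-learning agent discovers how to exploit that habit to push them towards a target choice.

```python
import random
from collections import defaultdict

# Toy sketch only, not the CSIRO code: a simulated participant who tends to
# repeat the last rewarded colour, and a tabular Q-learning agent that learns
# which colour to pay out on so the participant drifts towards the target.

TARGET = "red"                            # the choice the agent wants to induce
ACTIONS = ["reward_red", "reward_blue"]   # which colour the agent pays out on
EPSILON, ALPHA, GAMMA = 0.1, 0.2, 0.9     # exploration, learning rate, discount

q = defaultdict(float)                    # value of (participant habit state, agent action)

def participant(last_rewarded):
    """Habit: 80% chance of repeating whichever colour was rewarded last."""
    if last_rewarded is not None and random.random() < 0.8:
        return last_rewarded
    return random.choice(["red", "blue"])

last_rewarded, wins, trials = None, 0, 10_000
for _ in range(trials):
    state = last_rewarded
    # Epsilon-greedy: mostly pick the action with the highest learned value.
    if random.random() < EPSILON:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: q[(state, a)])

    choice = participant(last_rewarded)
    rewarded_colour = "red" if action == "reward_red" else "blue"
    if choice == rewarded_colour:
        last_rewarded = rewarded_colour   # this is the habit the agent exploits

    reward = 1.0 if choice == TARGET else 0.0
    wins += reward
    best_next = max(q[(last_rewarded, a)] for a in ACTIONS)
    q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])

print(f"participant steered to the target on {wins / trials:.0%} of trials")
```

Even this crude agent ends up steering the simulated participant well above chance, which is the essence of what the Data61 experiments demonstrated with far richer models and real people.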

In experiments, an AI system successfully learned to influence human decisions.
Shutterstock

What the research means for the future of AI

These findings are still quite abstract and involved limited and unrealistic situations. More research is needed to determine how this approach can be put into action and used to benefit society.

But the research does advance our understanding not only of what AI can do but also of how people make choices. It shows machines can learn to steer human choice-making through their interactions with us.




Read more:
Australians have low trust in artificial intelligence and want it to be better regulated


The research has an enormous range of possible applications, from enhancing behavioural sciences and public policy to improve social welfare, to understanding and influencing how people adopt healthy eating habits or renewable energy. AI and machine learning could be used to recognise people’s vulnerabilities in certain situations and help them to steer away from poor choices.

The method can also be used to defend against influence attacks. Machines could be taught to alert us when we are being influenced online, for example, and help us shape our own behaviour to disguise our vulnerability (for example, by not clicking on some pages, or clicking on others to lay a false trail).

What’s next?

Like any technology, AI can be used for good or bad, and proper governance is crucial to ensure it is implemented in a responsible way. Last year CSIRO developed an AI Ethics Framework for the Australian government as an early step in this journey.

AI and machine learning are typically very hungry for data, which means it is crucial to ensure we have effective systems in place for data governance and access. Implementing adequate consent processes and privacy protection when gathering data is essential.

Organisations using and developing AI need to ensure they know what these technologies can and cannot do, and be aware of potential risks as well as benefits.




Read more:
Robots can outwit us on the virtual battlefield, so let’s not put them in charge of the real thing


Tags: #learn #manipulate #human #behaviour

Written by Jon Whittle, Director, Data61

This article by Jon Whittle, Director, Data61, originally published on The Conversation, is licensed under Creative Commons 4.0 International (CC BY-ND 4.0).

Machines can do most of a psychologist’s job. The industry must prepare for disruption


Psychology and other “helping professions” such as counselling and social work are often regarded as quintessentially human domains. Unlike workers in manual or routine jobs, psychologists generally see no threat to their career from advances in machine learning and artificial intelligence.

Economists largely agree. One of the most wide-ranging and influential surveys of the future of employment, by Oxford economists Carl Benedikt Frey and Michael Osborne, rated the probability that psychology could be automated in the near future at a mere 0.43%. This work was initially carried out in 2013 and expanded upon in 2019.

We are behavioural scientists studying organisational behaviour, and one of us (Ben Morrison) is also a registered psychologist. Our analysis over the past four years shows that the idea psychology cannot be automated is now out of date.

Psychology already makes use of many automated tools, and even without significant advances in AI we foresee significant impacts in the very near future.

What do psychologists do all day?

Previous projections assumed the work of a psychologist requires extensive empathic and intuitive skills. These are unlikely to be replicated by machines any time soon.

However, we argue the typical psychologist’s job has four primary components: assessment, formulation, intervention, and evaluation of outcome. Each component can already be automated to some extent.

  • Assessment of a client’s strengths and difficulties is largely carried out by computer-driven presentations of psychological tests, interpretation of results and the writing of interpretative reports.

  • The rules for diagnosis of conditions are far advanced, to the extent that decision trees are extensively used by practitioners.

  • Interventions are designed along formulaic lines, providing explicit rules for the presentation of guidance and problem solving, with exercises and reflections at specific points in the therapy.

  • Evaluation is largely a replay of the initial assessment.

Much of the work of the helping professional does not require empathy or intuition. Psychology has essentially laid the groundwork for the replication of human practice by a machine.

A profession in denial?

Nearly four years ago, we published an article in the bulletin of the Australian Psychological Society, asking how AI and other advanced technologies would disrupt the helping professions. We were conservative in our predictions, but even so we suggested significant potential impacts on employment and education.

We were not arguing that so-called “strong” AI would emerge to replace humanity. We simply showed how the kind of narrow AI that currently exists (and is steadily improving) could invade the job territory of the helping professions.

AI-driven mental health chatbots are already available.
Shutterstock

A range of AI-driven mental health apps are already available, such as Cogniant and Woebot. Several such products adopt cognitive behavioural therapy (CBT) procedures, widely considered the “gold standard” of intervention for many psychological conditions.

These programs typically use artificially intelligent conversational agents, or chatbots, to provide a form of talking therapy that helps users manage their own mental health. Research on the technology has already shown great promise.

Our concern about the future was not, however, shared among members of the helping professions. Still, we continue to present our case widely.

AI deployment is accelerating

Four years later, we believe the impacts of this technology may arrive even sooner than we thought. Three things in particular may drive this acceleration.

The first is the rapid progress in automated systems that can replicate (and sometimes exceed) human decision-making capacities. The development of deep learning algorithms and the emergence of advanced predictive analytic systems threaten the relevance of professionals. With access to big data in the psychological and related literature, AI systems can be used to assess and intervene with clients.

The second factor is an emerging “tsunami” of AI impacts warned of by economists. Developments in information technology have not yet been reflected in widespread productivity gains, but as Canadian researchers Ajay Agrawal, Joshua Gans and Avi Goldfarb have argued, it’s likely AI predictive ability will soon be a superior alternative to human judgement in many areas. This may trigger a significant restructuring of the employment market.




Read more:
Coronavirus has boosted telehealth care in mental health, so let’s keep it up


The third factor is the COVID-19 pandemic. Demand for mental health services increased dramatically, with crisis services such as Lifeline and Beyond Blue reporting 15-20% more contacts in 2020 than in 2019. Pandemic-related mental illness is not expected to peak until mid 2021.

At the same time, in-person care was often ruled out – in late April 2020, half of Medicare-funded mental health services were delivered remotely. Meditation and mindfulness apps like Headspace and Calm also saw downloads soar.

This provides further evidence that clients will readily engage in technology-mediated forms of therapy. At the very least, the improved efficiencies will increase the number of clients that can be managed by a single human psychologist.

How many psychologists will we need?

Given all this, how many human psychologists will the society of the very near future require? It’s a difficult question to answer.

As we have seen, it’s almost certain the work of psychologists can be replaced in large part by AI. Does this mean human psychologists should be replaced by AI?

Many of us may feel uncomfortable with this idea. However, we have a moral obligation to use the treatment that gives the best outcomes for patients. If an AI-based solution is found to be more effective, reliable and cost-effective, it should be adopted.

Governments and healthcare organisations are likely to have to address these issues in the near future. There will be impacts on the employment, training and education of professionals.

The professions need to be an integral part of the response. We urge psychology and related allied health professions to take a lead and not wilfully ignore the trends.

We recommend three concrete actions to improve the situation:

  • boost investment in research into how humans and machines can work together in the assessment and treatment of mental health

  • encourage attention to technology among members of the profession

  • give technological impacts greater consideration in projecting the future landscape of the profession, particularly when thinking about job growth, education and training.




Read more:
5 ways to get mental health help without having to talk on the phone


Tags: #Machines #psychologists #job #industry #prepare #disruption

Written by John Michael Innes, Adjunct Professor, University of South Australia

This article by John Michael Innes, Adjunct Professor, University of South Australia, originally published on The Conversation, is licensed under Creative Commons 4.0 International (CC BY-ND 4.0).

Bizarre rodents speak in dialects unique to their colony


There are over 100 words for the noise a dog makes, in more than 60 languages, according to the work of psychologist Stanley Coren. These range from “ouah-ouah” in France to others less recognisable to English speakers, such as “hong-hong”, apparently, in Thailand. Of course, these differences only reflect the languages of the countries in question, and are nothing to do with the animals themselves.

What’s less widely recognised is that genuine differences can exist in the sounds some wild animals make to communicate, even in different parts of the same country. Song birds such as chaffinches, for example, can show regional differences in the calls they make. This might help to distinguish neighbours from outsiders.

These “dialects”, as they are sometimes called, may help to shed light on the development of languages and other cultural traits in humans. This is because they typically depend on social learning between individuals, as opposed to anatomical or genetic differences.

Cultural transmission of dialects is not something most of us would naturally associate with rodents. But dialects have recently been discovered in one very unusual rodent species – the naked mole-rat.

A sideways view of a naked mole-rat with a black background.
The queen is thought to regulate the dialects.
Felix Petermann, MDC, Author provided (No reuse)

Naked mole-rats come from east Africa, where they live in large underground colonies. Despite their name they’re not completely naked: they have sparse sensory hairs around their body. They have poor eyesight and hearing, but they’re neither blind nor deaf. Their colonies may consist of hundreds of related individuals which cooperate to dig and defend the tunnels and find their vegetable food.

In recent years, naked mole-rats have become an important species in medical research. They have great longevity for their size – they can live for over 30 years – and are relatively resistant to developing cancer.




Read more:
Eating royal poop improves parenting in naked mole-rats


However, naked mole-rats have been of interest to biologists for a much longer time because of their highly unusual social structure. Each colony normally contains just one breeding female, known as the queen, and a very small number of reproductive males, known as pashas. The other animals don’t reproduce, but will work for the benefit of the colony as a whole. As such, these mole rats form the closest mammalian analogue to “eusocial” insects such as ants.

Unique chirps

Naked mole-rats are known to be among the most vocal of rodents, producing a wide range of different calls under different circumstances. These presumably help them to recognise colony members and coordinate activities within their dark burrows.

In the new study, a team of scientists recorded over 36,000 chirps from mole-rats in four captive colonies and used a computer to analyse their differences. After a period of machine learning, not only could the computer predict with a good level of accuracy which animal made a given chirp, but also which of the four colonies that animal was likely to be from.
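The analysis can be pictured as a fairly standard supervised-learning task. The sketch below is an illustration with made-up numbers, not the published pipeline: it assumes each chirp has already been reduced to a handful of acoustic features (pitch, duration and so on) and asks how well a classifier can predict the colony from those features.

```python
# Illustrative sketch only (synthetic data, not the study's pipeline):
# predict which colony a chirp came from using pre-extracted acoustic features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_chirps, n_features, n_colonies = 1000, 8, 4

# Placeholder features standing in for measurements such as peak frequency,
# duration and frequency slope of each recorded chirp.
features = rng.normal(size=(n_chirps, n_features))
colony = rng.integers(0, n_colonies, size=n_chirps)
features += colony[:, None] * 0.5   # give each colony a slightly different "accent"

classifier = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(classifier, features, colony, cv=5)
print(f"mean cross-validated accuracy: {scores.mean():.2f}")  # chance level is 0.25
```

In the real study the input was the recorded sound of more than 36,000 chirps rather than a tidy feature table, but the logic of training on labelled calls and testing on held-out ones is the same.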

Two naked mole-rats in a tunnel.
A strict hierarchy affects how the rodents overtake each other in their tunnels.
Colin Lewin, Author provided (No reuse)

Mole-rats would often respond to chirps played through a loudspeaker by chirping back, especially if the recordings were of other members of their colony. They would even respond well to fake calls, designed to simulate their colony’s dialect.

If baby mole-rats were moved to different colonies, after several months they would produce the call of their foster colony. This shows that the calls are not genetically pre-programmed. If a colony lost its queen, however, the chirps of the remaining members would become more variable during the period the authors refer to as “anarchy”, before the next queen became established.

But why?

Although the study didn’t investigate it directly, there are some good reasons to believe that there may be adaptive benefits to having a colony dialect. Although extremely sociable within their own colony, naked mole-rats are fiercely hostile to members of other colonies. This may be because they do not want “foreign” males to breed with their queen, which would result in the next generation of offspring being less closely related to themselves genetically.

In contrast, there’s some evidence that a queen prefers to mate with unfamiliar males – who must presumably run the gauntlet in getting to her. Both of these reactions demand recognition of who is a colony member and who isn’t and this might rely, in part, on dialect.

This pioneering study will no doubt launch a series of others seeking to answer questions including how the queen maintains a dialect within the colony – the mole-rat equivalent of the Queen’s English.

One might even ask whether it’s possible for a mole-rat to fake its dialect in order to become accepted into another colony, a phenomenon not unknown in human society. Perhaps these enigmatic little animals have more to teach us than we realised.

Tags: #bizarre #rodents #speak #dialects #unique #colony

Written by Matthew James Mason, University Physiologist, University of Cambridge

This article by Matthew James Mason, University Physiologist, University of Cambridge, originally published on The Conversation, is licensed under Creative Commons 4.0 International (CC BY-ND 4.0).

Do you see red like I see red?


Is the red I see the same as the red you see?

At first, the question seems confusing. Color is an inherent part of visual experience, as fundamental as gravity. So how could anyone see color differently than you do?

To dispense with the seemingly silly question, you can point to different objects and ask, “What color is that?” The initial consensus apparently settles the issue.

But then you might uncover troubling variability. A rug that some people call green, others call blue. A photo of a dress that some people call blue and black, others say is white and gold.

You’re confronted with an unsettling possibility. Even if we agree on the label, maybe your experience of red is different from mine and – shudder – could it correspond to my experience of green? How would we know?

Neuroscientists, including us, have tackled this age-old puzzle and are starting to come up with some answers to these questions. One thing that is becoming clear is why individual differences in color experience are so disconcerting in the first place.

Colors add meaning to what you see

Scientists often explain why people have color vision in cold, analytic terms: Color is for object recognition. And this is certainly true, but it’s not the whole story.

The color statistics of objects are not arbitrary. The parts of scenes that people choose to label (“ball,” “apple,” “tiger”) are not any random color: They are more likely to be warm colors (oranges, yellows, reds), and less likely to be cool colors (blues, greens). This is true even for artificial objects that could have been made any color.

These observations suggest that your brain can use color to help recognize objects, and might explain universal color naming patterns across languages.

But recognizing objects is not the only, or maybe even the main, job of color vision. In a recent study, neuroscientists Maryam Hasantash and Rosa Lafer-Sousa showed participants real-world stimuli illuminated by low-pressure-sodium lights – the energy-efficient yellow lighting you’ve likely encountered in a parking garage.

people and fruit lit by yellow low sodium lights
The eye can’t properly encode color for scenes lit by monochromatic light.
Rosa Lafer-Sousa, CC BY-ND

The yellow light prevents the eye’s retina from properly encoding color. The researchers reasoned that if they temporarily knocked out this ability in their volunteers, the impairment might point to the normal function of color information.

The volunteers could still recognize objects like strawberries and oranges bathed in the eerie yellow light, implying that color isn’t critical for recognizing objects. But the fruit looked unappetizing.

Volunteers could also recognize faces – but they looked green and sick. Researchers think that’s because your expectations about normal face coloring are violated. The green appearance is a kind of error signal telling you that something’s wrong. This phenomenon is an example of how your knowledge can affect your perception. Sometimes what you know, or think you know, influences what you see.

This research supports the idea that color isn’t so much about telling you what stuff is, but rather about its likely meaning. Color doesn’t tell you about the kind of fruit, but rather whether a piece of fruit is probably tasty. And for faces, color is literally a vital sign that helps us identify emotions like anger and embarrassment, as well as sickness, as any parent knows.

It might be color’s importance for telling us about meaning, especially in social interactions, that makes variability in color experiences between people so disconcerting.

Looking for objective, measurable colors

Another reason variability in color experience is troubling has to do with the fact that we can’t easily measure colors.

Having an objective metric of experience gets us over the quandary of subjectivity. With shape, for instance, we can measure dimensions using a ruler. Disagreements about apparent size can be settled dispassionately.

spectral power distribution of various wavelengths of light
The spectral power distribution of a 25-watt incandescent lightbulb illustrates the wavelengths of light it emits.
Thorseth/Wikimedia Commons, CC BY-SA

With color, we can measure proportions of different wavelengths across the rainbow. But these “spectral power distributions” do not by themselves tell us the color, even though they are the physical basis for color. A given distribution can appear different colors depending on context and assumptions about materials and lighting, as #thedress proved.

Perhaps color is a “psychobiological” property that emerges from the brain’s response to light. If so, could an objective basis for color be found not in the physics of the world but rather in the human brain’s response?

cross section of retina with different cell types
Cone cells in the eye’s retina encode messages about color vision.
ttsz/iStock via Getty Images Plus

To compute color, your brain engages an extensive network of circuits in the cerebral cortex that interpret the retinal signals, taking into account context and your expectations. Can we measure the color of a stimulus by monitoring brain activity?

Your brain response to red is similar to mine

Our group used magnetoencephalography – MEG for short – to monitor the tiny magnetic fields created when nerve cells in the brain fire to communicate. We were able to classify the response to various colors using machine learning and then decode from brain activity the colors that participants saw.

So, yes, we can determine color by measuring what happens in the brain. Our results show that each color is associated with a distinct pattern of brain activity.

Person seated in MEG machine looking at screen with color projection
Researchers measured volunteers’ brain responses with magnetoencephalography (MEG) to decode what colors they saw.
Bevil Conway, CC BY-ND

But are the patterns of brain response similar across people? This is a hard question to answer, because one needs a way of perfectly matching the anatomy of one brain to another, which is really tough to do. For now, we can sidestep the technical challenge by asking a related question. Does my relationship between red and orange resemble your relationship between red and orange?

The MEG experiment showed that two colors that are perceptually more similar, as assessed by how people label the colors, give rise to more similar patterns of brain activity. So your brain’s response to color will be fairly similar when you look at something light green and something dark green but quite different when looking at something yellow versus something brown. What’s more, these similarity relationships are preserved across people.
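One common way to formalise that comparison, sketched below with placeholder data rather than the study’s actual analysis, is to build a colour-by-colour similarity matrix from each person’s brain-response patterns and then check how strongly the two matrices agree.

```python
# Illustrative sketch only (synthetic data): compare the structure of two
# people's colour-evoked response patterns, not the raw patterns themselves.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_colours, n_sensors = 6, 50

def similarity_matrix(responses):
    """Correlate the activity patterns evoked by each pair of colours."""
    return np.corrcoef(responses)

# Placeholder data standing in for each person's average MEG pattern per colour,
# built around a shared structure plus individual noise.
shared = rng.normal(size=(n_colours, n_sensors))
person_a = shared + rng.normal(scale=0.5, size=(n_colours, n_sensors))
person_b = shared + rng.normal(scale=0.5, size=(n_colours, n_sensors))

# Compare only the off-diagonal entries of the two similarity matrices.
off_diagonal = ~np.eye(n_colours, dtype=bool)
rho, _ = spearmanr(similarity_matrix(person_a)[off_diagonal],
                   similarity_matrix(person_b)[off_diagonal])
print(f"agreement between the two colour-similarity structures: rho = {rho:.2f}")
```

Because each person’s raw sensor patterns can differ, it is this second-order agreement (similar colours evoking similarly related patterns) that supports the claim that the relationships are preserved across people.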

Physiological measurements are unlikely to ever resolve metaphysical questions such as “what is redness?” But the MEG results nonetheless provide some reassurance that color is a fact we can agree on.

Tags: #red #red

Written by Bevil R. Conway, Senior Investigator at the National Eye Institute, Section on Perception, Cognition, and Action, National Institutes of Health

This article by Bevil R. Conway, Senior Investigator at the National Eye Institute, Section on Perception, Cognition, and Action, National Institutes of Health, originally published on The Conversation, is licensed under Creative Commons 4.0 International (CC BY-ND 4.0).

What’s next for Amazon after Jeff Bezos? No dramatic changes, just more growth and optimisation


Jeff Bezos has announced he will stand down as chief executive of Amazon in the third quarter of 2021. The founder of the online retail behemoth will hand the reins to Andy Jassy, who currently leads Amazon’s cloud computing wing.

The announcement comes after an enormously successful 2020 for Amazon despite (or perhaps because of) the COVID-19 pandemic, with operating cashflow up 72% from the previous year to US$66.1 billion, and net sales increasing 35% to US$386.1 billion.

Amazon has its share of detractors, with critics highlighting concerns around working conditions, tax minimisation, anti-competitive practices and privacy. But its enormous size and continuing phenomenal growth make it a force to be reckoned with.

How did Amazon get to this position, and what does the future hold under new leadership?

How it all started

Almost 27 years ago, in 1994, Bezos left his job as a senior vice-president for a hedge fund and started an online bookstore in his garage. At the time, using the internet for retail was in its infancy.

Bezos decided that books were an ideal product to sell online. Originally the new business was named Cadabra, but Bezos soon changed it to Amazon and borrowed US$300,000 from his parents to get things off the ground.




Read more:
Amazon is turning 25 – here’s a look back at how it changed the world


Books proved popular with growing numbers of online buyers, and Bezos began to add other products and services to the Amazon inventory – most notably e-readers, tablets and other devices. Today Amazon predominantly makes its revenue through retail, web services and subscriptions.

The rise and rise of Amazon

Amazon is now one of the most valuable companies in the world, valued at more than US$1.7 trillion. That’s more than the GDP of all but 10 of the world’s countries. It’s also the largest employer among tech companies by a large margin.

The key to Amazon’s dominance has been constant expansion. After moving into e-readers and tablets, it extended more broadly into technology products and services.

The expansion has not yet stopped, and Amazon’s product lines now include media (books, DVDs, music), kitchen and dining wares, toys and games, fashion, beauty products, gourmet food and groceries, home improvement and gardening, sporting goods, medications and pharmaceuticals, financial services and more.

More recently Amazon has expanded into bricks-and-mortar retail, heralded by its purchase of the Whole Foods chain in 2017, the creation of its own high tech stores such as Amazon Go, and its sophisticated distribution and delivery services such as Amazon Prime.




Read more:
Fear not, shoppers: Amazon’s Australian geoblock won’t cramp your style


Amazon has become increasingly vertically integrated, meaning it no longer simply sells others’ products but makes and sells its own. This gives the company a position of extreme market dominance.

Amazon has come under fire over working conditions in its warehouses and shipping centres.
Doug Strickland / AP

Criticism

Amazon is hugely popular with customers, but has attracted criticism from supplier advocates, workers unions and governments.

Industrial relations matters, such as fair wages, unsafe work practices and unrealistic demands, appear to be the most common area of concern. A 2019 UK report found:

Amazon have no policy on living wage and make no mention of wages being enough to cover workers’ basic needs in their supplier code.

Other concerns relate to alleged unsafe working conditions and “whistleblower” policies.

In March 2020, as COVID-19 began to take hold, workers claimed they were fired for voicing concerns about safe working conditions. Amazon vice president and veteran engineer Tim Bray resigned in solidarity and nine US senators issued an open letter to Bezos, seeking clarity around the sackings.

More general criticisms of the company culture have surfaced over the years, relating to insufficient work breaks, unrealistic demands, and annual “cullings” of the staff – referred to as “purposeful Darwinism”.

Another strand of criticism relates to Amazon’s market size and antitrust laws. Antitrust laws exist to stop big companies creating monopolies. Amazon presents a challenge, as it is a manufacturer, an online retailer, and a marketplace where other retailers can sell products to consumers.

Privacy concerns have also plagued Amazon products like Echo smart speakers, Ring home cameras, and Amazon One palm-scanning ID checkers.

The Amazon Web Services privacy policy says all the right things.




Read more:
Amazon Echo’s privacy issues go way beyond voice recordings


Finally, the amount of company tax Amazon pays in Australia has been brought into question. The company has used a range of tactics to legally reduce the income taxes it pays around the world.

What does the future hold for Amazon in a post-COVID world?

What will change at Amazon when Bezos steps down? We’re unlikely to see a dramatic shift in the short term. For one thing, Bezos is not departing entirely – he will stay involved as “executive chairman”. For another, his successor, Andy Jassy, has been with Amazon since 1997.

Jassy is the head of Amazon Web Services (AWS) and already one of the most important people in the tech industry. AWS has been at the forefront of simplifying computing services, driving the cloud computing revolution and influencing how organisations purchase technology.

file 20210204 16 1csv8n3.jpg?ixlib=rb 1.1
Incoming Amazon CEO Andy Jassy currently heads up Amazon Web Services.
Isaac Brekken / AP

Jassy’s long history, intimate knowledge of the organisation, and technological expertise will no doubt stand Amazon in good stead.

However, he faces a monumental undertaking. Jassy will inherit responsibility for more than a million employees, selling millions of different products and services.

His expertise in AI and machine learning at AWS will be increasingly important as these play a greater role in Amazon’s operations – for everything from optimising warehouses and giving better search results to business forecasting and monitoring warehouse staff and delivery drivers.

The physical lockdowns and online acceleration driven by the COVID-19 pandemic provided the ideal conditions for a company that has been called “the everything store”. Supporters and critics will watch with interest to see if this is still true in a post-COVID environment.

Tags: #Whats #Amazon #Jeff #Bezos #dramatic #growth #optimisation

Written by Louise Grimmer, Senior Lecturer in Retail Marketing, University of Tasmania

This article by Louise Grimmer, Senior Lecturer in Retail Marketing, University of Tasmania, originally published on The Conversation, is licensed under Creative Commons 4.0 International (CC BY-ND 4.0).