What Does It Mean to Be Human?

Artificial intelligence is one of the most intriguing developing fields in computer science – the desire to have computer systems which can think, learn, and function autonomously. The development of these systems raises the question, however: what does it truly mean to be human? Is it intelligence? Well, computers could already be considered much more intelligent than humans in some respects – if you put a complex equation into a programme like WolframAlpha, it can be solved in seconds. Is it emotions? The ‘emotional AI’ sector already processes large datasets to determine people’s emotions (in advertising, for example, it is often used to gauge people’s reactions to a product).
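To make that intelligence point concrete, here’s a minimal Python sketch (using the sympy library) of the kind of symbolic equation-solving a tool like WolframAlpha performs in an instant; the quartic below is just an arbitrary example of my own.

```python
# A toy illustration of symbolic equation solving -- the kind of task
# WolframAlpha handles in seconds. The quartic is an arbitrary example.
import sympy as sp

x = sp.symbols('x')
equation = sp.Eq(x**4 - 5*x**2 + 4, 0)

print(sp.solve(equation, x))  # [-2, -1, 1, 2]
```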

When humanness is so hard to define, how do we know when we have created true artificial intelligence? In 1963, Dan Curtis proposed Artificial Psychology, which sets out a series of conditions that must be fulfilled for an artificially intelligent system to be recognised as comparable in complexity to human intelligence.

  1. The artificially intelligent system makes all of its decisions autonomously (without supervision or human intervention) and is capable of making decisions based on information that is 1) New, 2) Abstract, and 3) Incomplete.
  2. The artificially intelligent system is capable of reprogramming itself (evolving), based on new information and is capable of resolving its own programming conflicts, even in the presence of incomplete information.
  3. Conditions 1 and 2 are met in situations that were not part of the original operational system (part of the original programming), i.e., novel situations that were not foreseen in the design and initial implementation of the system.

So it seems that human intelligence could be considered the ability to learn, face new situations, and apply prior knowledge. Artificial intelligence is broader than just machine learning, which only concerns the use of computers to mimic human cognitive functions. The system needs to be truly able to think for itself.

One issue with this, however, is that humans are very social creatures, and so the ability to feel emotion and follow social cues is an integral element of what it means to be human. Studies have shown that humans mindlessly apply social rules and expectations to computer systems, from gender roles to ethnic identities, and act with politeness and reciprocity even when engaging with machines. So perhaps the need to create emotionally intelligent AI is not quite so pertinent – we will project that onto the systems ourselves.

While the question of humanness still stands, the pursuits of the AI field will be incredibly exciting to watch in the years to come.

Therapy for Alzheimer’s

Affecting an estimated 44 million people worldwide, Alzheimer’s disease is the most common neurodegenerative disease and the leading cause of dementia. Every 65 seconds, someone in the USA develops Alzheimer’s, and in Singapore 1 in 10 people over the age of 60 are estimated to suffer from dementia. After decades of research, scientists are still puzzled as to the precise causes of Alzheimer’s, and there is no proven therapy at present.


There are two competing beliefs which claim to explain the cause of Alzheimer’s:

  1. The accumulation of a protein called amyloid-beta in the brain
  2. Metabolic dysfunction of the cell’s energy-producing powerhouse, the mitochondria

In a study conducted at Yale-NUS College by Jan Gruber, metabolic defects were found to occur significantly earlier than any notable rise in amyloid-beta levels. A small worm called Caenorhabditis elegans was used to identify these changes because its cells share many properties with human cells at the molecular level. This finding became significant when the team discovered that treating these worms with Metformin, a common anti-diabetes drug, reversed the changes and normalised the worms’ lifespan and health.

Despite billions of dollars in investment, trials of Alzheimer’s drugs targeting the amyloid-beta protein have been unsuccessful. The strong links found between Alzheimer’s pathology and mitochondrial dysfunction suggest a different protective strategy: directly targeting metabolic defects at an early stage, before protein aggregates exist, is a promising possibility.

Both metabolic and mitochondrial dysfunction are integral elements of normal ageing. Consequently, age-related diseases such as Alzheimer’s could be seen as manifestations of ageing itself. It may be easier to stop or remedy age-related diseases by targeting the mechanisms of ageing rather than treating individual diseases after symptoms arise.

Sexually frustrated weed

Today, confiscated weed is roughly 3 times more potent than it was in the mid-1990s, and the plants grow smaller and faster. It comes in hundreds of strains with different kinds of flavours and highs. Humans have learnt to engineer marijuana, and this has significant implications for users.


Female cannabis plants produce much larger psychoactive flowers, and consequently they are better at getting people high. The main psychoactive chemical is THC (tetrahydrocannabinol); the second major chemical is CBD (cannabidiol), which reduces anxiety. Cannabis contains over 100 different compounds that affect the human body, called cannabinoids. Our bodies naturally produce many similar chemicals, involved in processes such as appetite regulation and forgetting. By selecting the right combination of cannabinoids over several plant generations, growers can produce hybrids through crossbreeding, such as Pineapple Express.

Many processes influenced the production of hybrids. Until the late 1970s, the only marijuana plant grown in the USA was Sativa, which required sunny weather in states like California. Then Americans brought back Indica, a smaller, cold-resistant plant from the Hindu Kush mountains. Indica and Sativa were cross-bred, which allowed cannabis to be grown inside. When the Reagan Administration began using spy planes to search for marijuana plants from the air, Indica’s smaller size allowed growers to hide their plants indoors. Crossbreeding with Ruderalis, another plant type, allowed for shorter breeding times.

Perhaps the most significant transformation of the cannabis plant was the rediscovery of a historic breeding method. When female plants are pollinated, their THC production slows down so they can produce seeds. Isolating female plants from males means they are never pollinated, never produce seeds, and their THC production never slows. This process produces ‘sexually frustrated’ female cannabis plants. By taking clippings from these females, growers can clone a new generation of genetically identical plants, skipping the pollination process entirely. These plants keep producing more and more psychoactive resin, trying to attract pollen so they can make seeds. Sinsemilla (‘seedless’) plants have a THC content roughly 6% higher than normal cannabis.

The THC content of both plant types hasn’t changed much over the past few decades. However, as sinsemilla has become popular due to its higher concentrations, confiscated cannabis has also increased in concentration. THC, the psychoactive compound, and CBD, the relaxing compound, are connected: the more THC there is in a plant, the less CBD. In the 1990s, the ratio of THC to CBD was roughly 11:1. Today, it is roughly 250:1. With far less anxiety-reducing CBD per dose of THC, users face increased anxiety and paranoia.

The Sativa vs Indica binary has provided the basis for how most marijuana is sold: Sativa gives an energetic high, while Indica produces a lethargic ‘stoned’ feeling. Extensive cross-breeding means that visual identification of a plant gives no reliable indication of Sativa or Indica, so the labels on the drugs sold are highly subjective. Governments do not regulate cannabis strains, and furthermore, we do not know what a standard unit of cannabis looks like. Until society has a cannabis equivalent of a standard drink, people cannot gauge the effects of their use. Accurate labelling requires standardising a ‘puff’ and, ultimately, legalising marijuana.

Designer DNA

The debate over somatic versus germline gene editing rests on important distinctions. Somatic cells include most of the cells in the body, such as those of the skin, blood and brain; their DNA is not passed down to offspring. Germline edits involve the cells of the sperm, egg or embryo, whose DNA is passed to future generations. The difference between the two is significant, as germline editing would affect the global population and human evolution. Another notable divide which has emerged through discourse is the difference between therapy and enhancement: while therapies treat diseases, enhancements create advantages for those already healthy. Treating people living with diseases through somatic gene editing techniques is becoming common practice, and illnesses such as sickle cell anaemia, HIV and even cancer have been the subject of large-scale genetic research.

Somatic gene editing has always been far less controversial than germline editing, primarily due to the closed nature of its effects: any change made ends with the person. Whether through medicine or plastic surgery, others are not affected. The larger debate has now turned to whether we should move beyond treating the sick to editing diseases out of future generations. While creating a genetically modified baby is already illegal in over 25 countries, human capacity for temptation knows few bounds. In 2015 the United Nations called for a global moratorium, stating that germline techniques could “jeopardize the inherent and therefore equal dignity of all human beings and renew eugenics.”


The barrier separating therapy and enhancement has become increasingly blurred over time. There is little agreement about which genetic conditions need ‘fixing’ – are dwarfism and deafness diseases? One of the most prominent ethical threats in genetics is the concept of ‘designer babies’, where parents might determine eye colour, intelligence and height, among many other features. However, while some traits are relatively simple, many are incredibly complex, involving hundreds or thousands of targets on the genome. Perhaps the shorter path to designer babies is gene selection.

Pre-implantation genetic diagnosis (PGD) removes cells from embryos created through IVF and tests their DNA for disease. Fertility clinics can then select disease-free embryos to implant in the woman. As this technology develops, it is predicted people will be able to receive an entire genomic report card for their embryos. While we may not be able to edit complex traits, we may be able to predict them, using what are called polygenic scores. The fertility industry could steer the future of human evolution, especially if it becomes possible to grow embryos from skin cells. Genetic technologies limit the range of human variation, and the high cost associated with genetic alterations means we risk giving the wealthy a genetic advantage. Humans fall prey to simple answers for complex issues in the world of science; nevertheless, gene editing gives society a chance to significantly reduce human suffering.
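To make the idea of a polygenic score a little more concrete, here is a minimal Python sketch: a polygenic score is essentially a weighted sum of how many copies of each risk variant a genome carries. The variant IDs and effect weights below are entirely invented for illustration.

```python
# Minimal sketch of a polygenic score: a weighted sum of genotype
# dosages (0, 1, or 2 copies of the risk allele at each variant).
# Variant IDs and effect weights are invented for illustration only.

effect_weights = {       # hypothetical per-allele effect sizes
    "rs0000001": 0.12,
    "rs0000002": -0.05,
    "rs0000003": 0.30,
}

genotype = {             # hypothetical embryo genotype (allele counts)
    "rs0000001": 2,
    "rs0000002": 1,
    "rs0000003": 0,
}

score = sum(effect_weights[v] * genotype[v] for v in effect_weights)
print(f"Polygenic score: {score:.2f}")  # 0.12*2 - 0.05*1 + 0.30*0 = 0.19
```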

On Quantum Physics (From a Guy Who Definitely Doesn’t Understand Quantum Physics)

If taking physics 203 this semester has taught me one thing, it’s that I’m really not cut out for quantum physics. Throw me relativity and nuclear physics any time, but I’d rather bang my head against an infinite potential square bloody wall than calculate another expectation value by integrating by parts 5 times in a row.
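(For anyone morbidly curious about what those calculations involve, here’s a rough Python sketch of one done the lazy way, computing the expectation value ⟨x⟩ for the ground state of an infinite square well of width L. The well and state are my own generic example, not a specific problem from the course.)

```python
# Expectation value <x> for the ground state of an infinite square well
# of width L -- a generic illustrative example, not a course problem.
import sympy as sp

x, L = sp.symbols('x L', positive=True)
psi = sp.sqrt(2 / L) * sp.sin(sp.pi * x / L)  # normalised n=1 wavefunction

expectation_x = sp.integrate(psi * x * psi, (x, 0, L))
print(sp.simplify(expectation_x))  # L/2, as the symmetry suggests
```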

So I don’t enjoy quantum physics, which is not particularly interesting content for a blog post, seeing as none of you really care what I like or don’t like. There is, in fact, a point to this. What I find interesting is that in high school, quantum physics was one of the parts I was most looking forward to doing in a physics degree, right behind astrophysics. And I have to wonder, having slogged through the three hundred integrals required for the course, why this was ever the case when I’m now certain that I have no desire to do any more quantum physics.

I think the most obvious conclusion is the Dunning–Kruger effect – for those of you who don’t know, it’s basically the idea that when you have no knowledge of a subject, you think you’re good at it, because you don’t know enough to know how bad you are. In high school, you don’t really know anything about anything; the most complicated quantum you’ve ever done is the photoelectric effect, and the only equation you think exists is E=hf. You get the idea that it’s this cool, abstract, modern branch of physics that really isn’t that bad. You’d be wrong, but that’s high school, and if you were being asked to calculate expectation values in high school, that would be a little concerning. To me, there’s a deeper problem, and it comes from our experiences with science when we’re younger.

I imagine that most of you were like me in your interest in science when you were kids, by which I mean that you would read up or watch videos on it in your own time, talk to your friends about it, and generally act like depressing nerds. A lot of this came in the form of science communication for laymen, filled with nice simplifications and analogies to make things easier to understand. I don’t take issue with science communication, but I think with some of the more complicated fields, such as quantum physics, it can end up giving people the impression that they know more about a subject than they do, especially when no-one really knows enough to correct you. I remember having a friend who, whenever he was questioned about something he was saying, would just come out with ‘because quantum effects’ or something, and none of us knew anything about quantum effects, so we all just took his word for it. Hell, I remember being questioned about something in an astronomy presentation I did and coming up with ‘dark energy’ as my reply, which was probably right, but I didn’t at all understand why or how. The point I’m trying to make is that a lot of the science communication we’re exposed to in these more complex, modern fields can be somewhat misleading and oversimplified, making us think we’re much better at something than we really are. I don’t really know how to fix this, but I’d rather try my hand at it than at another Schrödinger equation.

New Zealand comes third place

in adult obesity among all OECD countries!

You’re either hearing this for the first time and are shocked, or you’re rolling your eyes because you’ve heard it several times before.

When I learnt about this fact a couple of years ago, I was very surprised. However, it really isn’t that surprising in 2019.

To make matters worse, an article from the news site Stuff states that Kiwi kids are now ranked 2nd highest for childhood obesity among OECD countries.

Human evolution has been discussed widely. Humans evolved from ape-like ancestors over millions of years, and through this process, diets changed as well. Early humans survived off meat they hunted or leftover meat from a bigger carnivore’s meal, which required walking for many miles on hot, dry days just to fill their stomachs.

Overview of human evolution

About one million years ago, early humans discovered cooking, finding ways to use plants effectively and to store starch. Food became more available, meaning far less hunting and walking for miles on end.

Access to a whole range of food straight out the door is one of the reasons humans now store more fat than ever before.

It’s just another modern day problem that we as individuals can fix, but it won’t be easy.  


Links

https://phys.org/news/2015-08-early-human-diet-habits.html

On Alien Life and Why I Really Don’t Care

First off, I’m not claiming that aliens are uninteresting, having never met an alien. I’m sure some of them are fascinating fellows that I’d love to have a good chat with. But I do view the whole search for alien life as a fairly ludicrous endeavour, and I’m here, as someone completely unqualified, to bandy my opinion about it.

This whole thing was spurred by me reading an article on space.com titled ‘Organic Compounds Found in Plumes of Saturn’s Icy Moon Enceladus’. The actual astronomy of the situation, I think, is very interesting: the moon has a huge ocean beneath its frozen surface and a series of geysers, and when these geysers erupt, they pick up material from the ocean and eject it into space, since the moon has no atmosphere. Astronomers can then analyse this ejected material, and in doing so they detected organic compounds, many of which are important in forming amino acids on Earth. I was never much of a biologist, but from what I’m aware, amino acids are important building blocks of life, which means their presence on the moon points to the possibility of life existing there.

I have no problems with any of this; finding alien life would be a hugely significant scientific discovery, and there’s no particular harm in theorising based on some reasonable discoveries and our current knowledge of life. But I do think that it’s odd that the search for alien life has somehow captured the collective consciousness of the planet, and the main reason I think that is that if we discovered alien life, it would mean shit all to the large majority of us. Intelligent alien life, maybe, but something like what’s detailed in the article would be nothing but the beginnings of an evolutionary cycle a few billion years behind ours. To most of us, that would be an entirely uninteresting piece of information with no impact on our lives. Yet people still remain so fascinated by the prospect. I think a large reason for this is that it proves that life is possible on other planets, but to me, this isn’t even that interesting. It was possible on our planet, and the universe is unfathomably vast. Life was always possible on other planets; it’s no great revelation if we find it, because given the size of the observable (and unobservable, at that) universe, we were never guaranteed to be able to observe it anyway.

I don’t think that all science needs to have an immediate impact on the lives of people who know about it, mind you; I want to spend most of my life looking at space things, which is probably the least practical science one can do. But I do think that for something that has garnered such huge interest from the population, the search for alien life is one of the most uninteresting scientific prospects of the modern day. Maybe people like the idea because it means that we aren’t alone, but I think that’s foolish, too. Human life is great, guys. The fact that we exist at all is more interesting to me than any alien life might be.

On the Probability of Dragon Slaying

I’ve recently gotten into Dungeons and Dragons, much to the disdain of my mother, who told me once when I was very young that it would result in me selling my soul to demons. In her defence, she was right, since I immediately made my character a warlock who made a pact for power with Cthulhu, but I digress. More importantly, due to this new hobby, I’ve recently developed something of an interest in dice and probability. Being the unhealthily obsessive optimiser that I am, I’ve been thinking hard about how to maximise my imaginary warlock’s imaginary damage output to try to kill the imaginary dragon slightly faster. This is not a good way to be spending my mental faculties at this time in the semester, but it’s also much more entertaining than studying, so here’s a random insight I’ve had about probability over the course of playing this game: humans are incredibly bad at it.

I’ve become convinced, through some of the conversations that I’ve had with my group, that the human brain is not designed to understand probability. First example: the party’s rogue had the option of attacking with either her bow or her crossbow (these are the sort of boring choices you make when you haven’t sold your soul for magical powers). On a hit, the bow would do 1d6+3 damage (which means roll a six-sided die, and add 3 to the number rolled), while the crossbow would do 1d8+2 damage. She was convinced that the crossbow was the better option, because a d8 is two higher than a d6, after all, so it must be better damage. In fact, the mean value for both weapons is 6.5, so the average damage was statistically the same. But there’s something deceptive about maximums that seems to appeal to the human brain, tricking us into believing that the higher value is better. Another point on maximums – say you have the option of casting a spell that does 2d12 worth of damage or a spell that deals 4d6. Which should you pick? A decent number of people would say it doesn’t matter, because both do a maximum of 24 damage, but the average of a 6-sided die is 3.5, while the average of a 12-sided die is 6.5, meaning you’d be dealing 13 damage on average versus 14. Again, the deceptiveness of a maximum value compared to an average.
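If you’d rather trust a computer than your gut, here’s a quick Python check of both comparisons using exact expected values (no simulation needed):

```python
# Exact expected damage for the dice choices discussed above.
def avg_die(sides):
    """Average roll of a fair die with the given number of sides."""
    return (1 + sides) / 2

print(avg_die(6) + 3)   # 1d6+3 -> 6.5
print(avg_die(8) + 2)   # 1d8+2 -> 6.5 (identical on average)
print(2 * avg_die(12))  # 2d12  -> 13.0
print(4 * avg_die(6))   # 4d6   -> 14.0 (same maximum, better average)
```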

Even in instances without averages, I’ve seen a decent number of errors made in the group. The easiest example comes with the death saving throw mechanic, which determines whether an unconscious character lives or dies. When unconscious, you roll a d20. On a roll of 10 or higher, you succeed the throw, but if you roll less than 10, you fail. Almost everyone, when they first heard about this mechanic, thought it was a 50/50 chance, because ten is halfway to 20, but this is not the case; there are 9 failing values from 1-9 and 11 succeeding values from 10-20, so the odds are somewhat favourable to success. Yet instinctively, one is fairly prone to simply say 50/50.
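Counting the outcomes directly settles it:

```python
# Death saving throw: succeed on a d20 roll of 10 or higher.
successes = [roll for roll in range(1, 21) if roll >= 10]
print(len(successes) / 20)  # 0.55 -- 11 of 20 outcomes succeed, not 50/50
```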

This extends out of dice games, though. One famous example is the Monty Hall problem, which I won’t go through in detail here, since most of you have probably heard of it and also because I’d rather talk about how much damage I can do with my eldritch blasts. Even when shown the logical process for solving that problem, though, it is very difficult to understand exactly why it works the way it does; I remember my year 9 maths class descending into chaos when we first saw it. Honestly, even trying to understand randomness as a concept can be quite a feat, because the human brain does not seem to naturally grasp random things, always searching for order and patterns. All of this leaves us, I would suggest, pretty terrible at interpreting probability. It’s a sobering thought that there are tasks the human brain isn’t really made to excel at.
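P.S. For the sceptics, here’s a short Monte Carlo sketch of the Monty Hall problem; switching wins roughly two-thirds of the time, exactly as the maths says it should.

```python
# Monte Carlo simulation of the Monty Hall problem.
import random

def play(switch):
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # Host opens a door that hides no car and isn't the player's pick.
    opened = next(d for d in doors if d != car and d != pick)
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

trials = 100_000
print(sum(play(True) for _ in range(trials)) / trials)   # ~0.667 (switch)
print(sum(play(False) for _ in range(trials)) / trials)  # ~0.333 (stay)
```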

The Memory of Fish

Long has the humble fish been stereotyped as having an incredibly short attention span and memory – look no further than the critically acclaimed literature of “Finding Nemo” to corroborate this. It turns out that my childhood was a lie, and so was Dory’s portrayal as a fish with an incredibly short-term memory. Significant evidence has surfaced suggesting that fish may be far more capable than we give them credit for, with many scientific publications indicating that fish have a very reasonable capacity for memory. More recently, some have even claimed that information can be implicitly passed down through generations via DNA. Today I will explore the findings behind the medium-term and generational memory capacity of our aquatic friends.

Over the medium term, fish have been shown to recognise various objects and actions long after last encountering them. Examples include the Red Sea clownfish, which could still identify its mate after 30 days of imposed separation [1]; rainbow trout remembering the association between pushing a button and receiving food 3 months after the button had been removed [2]; and an informal experiment on a group of channel catfish demonstrating that they could remember a human command (which preceded them being given food) 5 years after they last heard it [3]. The fact that very similar findings have surfaced from independent studies on different species of fish is fascinating, and points towards a genuinely large capacity for memory, contrary to popular belief.

Now when we start thinking long term, it starts to get interesting. A recent discovery claims that fish can pass information down through generations without any form of physical communication. This is done via DNA methylation, which is essentially the process of adding methyl group(s) to the DNA molecule [4]. This alters the DNA without actually changing the sequence, hence not causing any mutations. The methyl groups alter how the genes act, with researchers reliably linking famine-like experiences in a fish’s past to specific changes in these methyl groups and where they appear. These life experiences can be passed down through generations because, uniquely, the methyl groups are not ‘erased’ in zebrafish specifically, unlike in most other lifeforms, including humans. It is still unclear exactly how this ‘memory’ would be used by the fish that inherit it, but it certainly opens the door to more extensive studies in which this could be explored in greater depth.

Further to this, it has been found that fish have social intelligence, which is a little beyond the scope of what I wanted to talk about, but here’s a link if you’re interested: http://www.howfishbehave.ca/pdf/Social%20intelligence.pdf

References:

  1. Fricke, H. (1974) Fish cognition: a primate’s eye view
  2. Adron, J.W., Grant, P.T., and Cowey, C.B. (1973) A system for the quantitative study of the learning capacity of rainbow trout and its application to the study of food preferences and behaviour.
  3. Reebs, S.G. (2008) Long-term memory in fishes
  4. Oscar Ortega-Recalde, Robert C. Day, Neil J. Gemmell & Timothy A. Hore (2019) Zebrafish preserve global germline DNA methylation while sex-linked rDNA is amplified and demethylated during feminisation.

5G – time to get out the tin foil hats?

I work in the contact centre of a telecommunications company, which means I have exclusive access to some of the most interesting conversations out there. It started to come to my attention that a decent number of people are incredibly apprehensive about the introduction of 5G, so I thought I’d take a look into it myself. They argue that there are significant health risks involved with the technology, specifically because it operates on a much higher frequency and requires base stations to be much closer to people in order to be effective.

So what exactly is 5G? 5G is the 5th generation of cellular network technology. We currently use 4G for data communication, and 3G for the majority of calls and SMS messaging. However, as more and more technologies are developed, 4G is getting overwhelmed due to its limited capacity and speed. This is where 5G comes in. 5G will commonly operate at much higher frequencies (30-300 GHz), which are far less congested than the current 3G and 4G bands (approx. 0.5-3.5 GHz), allowing higher data-transfer speeds. It is expected that 5G will offer speeds up to 100x faster than current 4G systems, meaning that at its peak speed you could theoretically be downloading 50 4K movies from Netflix simultaneously. The drawback of the high frequency is that it cannot travel far, with base stations needing to be installed every 300 m or so, making it a lot less economically viable.
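As a rough sanity check of that Netflix claim, here’s a back-of-envelope calculation in Python. The 25 Mbps figure for a 4K stream, and the reading of ‘downloading simultaneously’ as sustaining 50 parallel streams, are my own ballpark assumptions rather than figures from any spec.

```python
# Back-of-envelope check of the "50 simultaneous 4K streams" claim.
# The 25 Mbps per-stream figure is a ballpark assumption.
stream_4k_mbps = 25
required_mbps = 50 * stream_4k_mbps

print(f"Required throughput: {required_mbps} Mbps "
      f"(~{required_mbps / 1000:.2f} Gbps)")
# ~1.25 Gbps -- plausible for 5G, far beyond typical 4G throughput.
```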

At this point I started to think “So what? I don’t really mind waiting a few more seconds for something to load, especially given how expensive the implementation could be.” But there’s more to it than just speed for consumers. One of the key features the new technology allows is reduced latency, i.e. it takes less time for a device to receive data back and begin any given process. A good example is a driverless car: as it moves through a city, it constantly processes the information it can see in order to operate safely. If these vehicles could near-instantaneously communicate with traffic lights and other cars to decide how to act, they could be far more reliable, with the increased speed of 5G ensuring it doesn’t take a few extra seconds to forestall an accident or collision. Additionally, the increased capacity makes it far more practical to support IoT (Internet of Things) devices, allowing them to communicate and making for seamless interaction between multiple technologies. An interesting demonstration of the 5G network’s effectiveness was an experiment in which 3 robots had to communicate wirelessly with each other in order to balance a ball in the centre of a flat plate. Performed over the 4G and 5G networks, the task took 11 seconds and 3 seconds respectively. This is another demonstration of how the new technology could improve our day-to-day lives when applied to other situations.
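To put the latency point in perspective, here’s a tiny calculation of how far a car travels before any network response can even arrive. The ~50 ms (4G) and ~5 ms (5G) round-trip figures are illustrative assumptions, not measurements from the robot experiment.

```python
# Distance a car travels during a network round trip, at city speed.
# Latency figures are illustrative assumptions.
speed_kmh = 50
speed_mps = speed_kmh / 3.6  # metres per second

for network, latency_s in [("4G", 0.050), ("5G", 0.005)]:
    blind_distance = speed_mps * latency_s
    print(f"{network}: {blind_distance:.2f} m travelled before a reply arrives")
# 4G: ~0.69 m; 5G: ~0.07 m -- an order of magnitude less 'blind' distance.
```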

Robots attempting to balance a ball

As with most major changes in society, 5G has been met with a high level of scepticism from many people who express genuine fear that the technology could have significant health ramifications. Their argument is that these much higher regions of the EMR spectrum (specifically millimetre waves) can cause skin and eye damage. Although they substantiate these claims with research, there seem to be direct conflicts with a larger number of other sources that ran similar investigations. The weight of evidence pointing towards these frequencies being harmless suggests that any subjects who did display the alleged symptoms probably developed them by chance rather than because of the treatment applied. Even with all this in mind, the fact that these higher-frequency waves cannot propagate through material very well in the first place is further compelling evidence that the introduction of these systems is likely harmless.

There are a whole host of interesting new technologies that will make this possible, and if you’re interested I suggest checking out this link, because I just realised how long this blog entry is.

Anyway, to briefly conclude, I believe there has been enough research to show that we are safe. While we probably don’t need 5G in NZ right now, it paves the way for so many cool applications further down the line that we should give it a good chance.
