On March 15, social media users found it difficult to avoid any news relating to the horrific shooting in Christchurch.
This is not uncommon when an event of such significance occurs.
However, also unavoidable were images from, and shares of, the video the shooter streamed online during his sickening, murderous rampage.
Some media outlets around the world used the footage to illustrate their stories. Some even led stories with the video, picking up comments made by the shooter and victims during the vicious attack.
Other media refused to use any images or video relating to the shooting.
The video was live-streamed for 17 minutes. Within three days of the attack, Facebook had removed 1.5 million copies of it from its site.
This is not what social media was created for, and the video left the various platforms scrambling.
The informal broadcast has prompted countries around the world to examine their social media laws and regulations.
Australia is not immune to this conversation. The Morrison Government has come out strongly in the wake of the Christchurch attack.
Executives at Facebook, Google, Twitter and other social media companies could face jail time if their platforms fail to remove terrorist content.
"We need to prevent social media platforms being weaponised with terror content," Mr Morrison said in a statement.
"If social media companies fail to demonstrate a willingness to immediately institute changes to prevent the use of their platforms, like what was filmed and shared by the perpetrators of the terrible offences in Christchurch, we will take action."
These are important conversations to have, and they encourage corporate responsibility as we continue to discover the effects of advancing technology.
The creators of these technologies have a responsibility they must take ownership of: ensuring that messages of hate and crime are not spread around the world unchecked.