BMGT3001 Governance and Business Ethics



Unit Learning Outcomes Assessed:
ULO 1: Examine the contribution of an ethical schema to the good governance of the organization.
ULO 3: Evaluate ethical dilemmas in the global environment based on knowledge of the diverse cultural and philosophical traditions that influence behavior.

Graduate Attributes Assessed:
GA 1: Communication; GA 2: Collaboration; GA 3: Research; GA 4: Critical Thinking; GA 5: Ethical Behavior; GA 6: Flexibility


Each group will choose a case study related to an ethical dilemma concerning the governance and operations of any current issue at the local, national, or global scale.
• The case study must not be from the textbook.
• The case study must relate to a weekly topic.
• The case study must not be older than 10 years.
• Research the history and analyze the situation, including all relevant stakeholders and their actions, using a variety of relevant ethical frameworks.
• Introduction
History/background: What is the issue at hand?
To what topic of ethics does it relate?
Where is the issue prevalent? Why is it important?
• A brief history of the organization
• Discussion of the case
• Ethical Decision Making Approaches & Theories
o Explain which ethical decision-making approaches and theories you relied upon to reach your conclusions and why.
• Summary/Conclusion
o Restate the importance of the issue





Misuse of Personal Information


History/background: What is the issue at hand?

“Personal data” means any information relating to an identified or identifiable natural person (“data subject”). An “identifiable person” is one who can be identified, directly or indirectly, particularly by reference to an identifier such as a name, an identification number, location data, an online identifier, or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural, or social identity of that natural person (Janeček, 2018). There is no one hundred percent protection against data misuse. Those who want to get hold of data usually have an easy time of it. Most often, criminals find people’s data on the Internet. After all, people leave behind numerous traces there: access data for e-mail portals and messenger services, online banking, or various stores. Consumers read about data leaks in forums almost every day: information such as user names, real names, and e-mail addresses is captured and offered for sale, even on the darknet. Sometimes the perpetrators hijack entire user accounts.

Data protection is a term that emerged in the second half of the 20th century and is delimited and understood in different ways. Depending on the perspective, data protection is understood as privacy protection, protection of personality in data processing, protection of the right to informational self-determination, or protection against inappropriate data processing (Custers et al., 2019). Data protection is often viewed as the right of every person to decide for himself or herself which of his or her personal data should be accessible to whom and when. The principle of such a data protection right is that the imbalance of power between organizations and individuals can be made subject to conditions. Data protection is intended to counteract the tendency in the progressively networked and digital information society toward the emergence of data monopolies held by private companies, the proliferation of state surveillance measures, and the “transparent individual.”

The protection of personal data and respect for privacy are essential rights. The federal government has always emphasized the need to strike a balance between consolidating security and respecting human rights, which include data protection and privacy. Some data protection rules have recently come into force, strengthening citizens’ rights and simplifying rules for businesses in the digital era. All citizens have the right to know which of their personal data the Australian Parliament holds, and they have the right to object to processing, to request modification or removal, to data portability, and to request restriction of processing (Mansted & Logan, 2020). If the processing is based on consent or explicit consent, the data subject also has the right to withdraw that consent at any time, although this does not affect the legality of the processing conducted on the basis of the consent before its withdrawal.

To what topic of ethics does it relate?

The topic relates to the ethical issue of privacy. The social discourse on privacy has so far focused very much on the individual and their rights (and infringements of those rights), and digital threats are difficult to grasp at this level. In the process, people lose sight of how fundamentally privacy regulates their social, economic, and political relationships, especially in the digital society. From an ethical perspective, profiling is problematic because it undermines norms of public equality that protect individual interests. The liberal challenge is that protection concepts focused on the individual, especially “informational self-determination,” are no longer effective when privacy is transformed from an individual right into a collective good.

The digital revolution is advancing so rapidly that our language sometimes lags behind reality: for example, the expression “going online” hardly makes sense today because the Internet has become the infrastructure for almost all of our everyday activities. Communication, information, shopping, entertainment, account management, political participation, organizing, job searching, and much more now take place in virtual environments where we remain present with our profiles even when we are “offline.” Anyone who carries a smartphone is permanently “online.” But people have also become vulnerable (Amsyar, Christopher, Dithi, Khan, & Maulana, 2020). Their digital existences are expanding tremendously in virtual space, as every digitally mediated activity collects and stores personal data, often in foreign countries. It is becoming increasingly difficult for people to control the information that constitutes their identity.

Digitization is far from over. We are currently on the threshold of the “Internet of Things,” which will further increase the amount of personal information released. The consequences will not only affect people’s social relationships; there, flexible adaptation to new conditions of life can most easily be expected (Alferidah & Jhanjhi, 2020). Far more problematic is the fact that the Internet is usually followed closely by the market and the state, that is, by profit and power interests. Privacy, hitherto an essential element of a social balance of power between individuals, business, and the state, is an obstacle here; it is more or less deliberately undermined by business and the state: collective investment in the capability to fuse data is many times greater than investment in technologies that would enhance privacy.

This imbalance can really only be addressed by new rules. Indeed, the digital revolution has become a controversial political issue for about half a decade, especially since the Snowden revelations in 2013. On the one hand, states have continued to expand their digital surveillance capabilities, especially in the countries of the global South. At the same time, however, resistance has grown (Robbins, 2021).

Where is the issue prevalent? Why is it important?

The Internet forgets nothing: everything that has found its way onto the net stays there, especially when third parties copy, store, and redistribute data (Eichhorn, 2020). Services demand personal data when logging in, and app installations require extensive access authorizations in order to be used at all. During daily surfing, mobile communication with the smartphone, shopping on the Internet, or registering for online games, users leave behind information and data that can be put to further commercial use by the Internet provider or display services. In social networks, it is often unclear what information other users can see. The information and data exchange between different services is equally opaque, especially when account links or single sign-on are used.

Again, this happens without the user’s knowledge; the providers then refer to the general terms and conditions (GTC) or data protection declarations. However, very few people have read these, and the terms and conditions are often incomprehensible. In general, a distinction must be made between misuse of personal data by providers and misuse by other users. This involves various risks, namely:

  • the further commercial use of the data, for example, the reading of one’s own data by providers for advertising purposes,
  • the tapping of sensitive data by other users for criminal purposes,
  • the risk of being harassed or insulted by other users through the misuse of personal information.

Cybercriminals have long since moved beyond targeting large corporations. Thanks to new attack methods, smaller companies are also increasingly falling victim to attacks. Data is encrypted, or company networks are paralyzed, in order to extort a ransom (Lallie et al., 2021). Therefore, effective protective measures against data misuse are also a must for small and medium-sized companies. Studies show that around 20 percent of people in Australia have already been victims of data misuse, so almost one in five Australians has already been forced to deal with the issue of identity theft. And often, the police are powerless. But it does not just affect private individuals. Hackers focus mostly on companies, where user data is stolen and used for the attackers’ own interests. The economic damage is great for both companies and private individuals. It is therefore important to explore this issue in order to alleviate the damage it causes to victims.

A brief history of the organization

Back in 2003, as a young Harvard psychology and computer science student, Mark Zuckerberg programmed his own site and put it online: the site featured two randomly selected pictures of female students from the school’s files, and the user was allowed to decide which of the two was more attractive. However, the site did not last long because Zuckerberg had posted the images online without the consent of the individuals. Shortly after, on the advice of an acquaintance, the Winklevoss twins, Cameron and Tyler, asked him to help them make their page a reality. Together with Divya Narendra, the two wanted to open an online site for Harvard where people could exchange ideas and possibly find life or study partners. In the beginning, Zuckerberg seemed taken with the idea, but in the process he formed his own ambition of networking the entire world, one he had allegedly harbored before. So he pushed the Winklevoss twins’ project aside to promote his own website (Hall, 2010).

With the help of Dustin Moskovitz, Chris Hughes, and Eduardo Saverin, Zuckerberg succeeded in putting his site online for all Harvard students on February 4, 2004, under the motto “Facebook – An Open and Connected World.” The site, named after the printed “face books” (student yearbooks), was quickly well received by the students. Zuckerberg reacted quickly and made his site available to all Ivy League universities just one year later. Despite various lawsuits from the Winklevoss twins, who accused him of stealing their idea, Zuckerberg continued to shape his project. Also in 2004, he met Sean Parker – one of the co-founders of the music streaming service Napster. Talking to Parker, who encouraged him and offered support, he decided to drop the “the” from the name to make the name appear cleaner (Phillips, 2007).

The community continued to grow. By the end of 2005, Facebook already boasted six million users. The following year, Zuckerberg launched the site for mobile use as well. In September of the same year, he also expanded registration so that anyone over the age of 13 could create their own profile and post messages on the wall of other users.

To keep Facebook entertaining and give its users even more offerings, third-party applications were enabled. However, through these new gaming and communication apps, the personal data of the users accessing them could be viewed. The rapid growth of these offerings resulted in more and more negative feedback from users. Zuckerberg and his team reacted quite quickly by blocking various apps when they violated Facebook’s existing policies. Nevertheless, the topic of data privacy remains contentious with regard to Facebook.

Discussion of the Case

In March of 2018, The New York Times and The Guardian reported that Donald Trump’s campaign team had hired the firm Cambridge Analytica to collect Facebook data to predict and influence users’ voting behavior. The data analytics firm was founded in 2013 by conservative hedge fund manager Robert Mercer and later Trump campaign strategist Steve Bannon as a subsidiary of the British firm Strategic Communications Laboratory. More than 50 million Facebook users were affected by the data theft, according to initial reports. Some of them had voluntarily revealed the information when they took part in a personality test on Facebook. Neuroscientist Aleksandr Kogan from Cambridge University in the UK developed the “thisisyourdigitallife” app for this test; it had to be added to Facebook separately, and users thus allowed access to their data (Brown, 2020). However, the app allowed Kogan to tap not only the data of those who took the test, but also that of their Facebook friends. Kogan sold his treasure – and broke Facebook’s rules in the process – to Cambridge Analytica. All this blew up because Cambridge Analytica co-founder Christopher Wylie no longer wanted to contribute, as he said, to this “tool of psychological warfare.” Wylie reported the company’s actions to all major U.S. media outlets: “We invested over a million dollars. So it wasn’t cheap, but it was acceptable for the amount of data. We got the information unusually quickly, relatively cheaply, and it was of high quality. If you want to influence U.S. elections, you can get everything you need there from a single source.”

This was a huge breach of trust for Facebook’s more than two billion users and its investors. The company’s stock value fell 18 percent in a matter of days, reducing its market value by about $80 billion (Ehondor & Ogbu, 2020). The hashtag #deletefacebook – “delete Facebook” – went viral. Although only about four percent of users actually said goodbye to the platform, according to a Reuters poll, the pressure on company founder Mark Zuckerberg grew. He had already promised better data protection in 2011 in a dispute with U.S. regulators. Critics accused him of, first, not keeping that promise and, second, not doing enough to combat the theft of Facebook data by Cambridge Analytica and other app developers. And what did Zuckerberg do to control the damage? He kept quiet at first. But as pressure continued to mount on his company, he embarked on an apology tour of U.S. media outlets: “This was a huge breach of trust, and I’m sorry it happened. We have a responsibility to protect people’s data (Hinds, Williams, & Joinson, 2020). If we can’t do that, we don’t deserve to serve people. Now we have to make sure this doesn’t happen again.” Zuckerberg promised to invest more in data protection and to develop additional safeguards from then on. He asked for time in a Vox interview: “It’s going to take a couple of years when it comes to solving the problems and getting out of this slump, because we haven’t invested enough in this area so far. I wish I could solve it in three, six months, but the fact is, it’s going to take a while.”

Ethical Decision Making Approaches & Theories

The object of ethics is morality. Therefore, theories that deal with the various aspects of morality can be called ‘ethics.’ However, there are very different approaches to this phenomenon. In this respect, different types of ethics or moral theories can be distinguished from each other. On the one hand, there are normative ethics, which formulate and attempt to justify moral judgments, and on the other hand, there are descriptive ethics, which do not make moral judgments, but merely describe their object, morality, in its various aspects and manifestations (Lourie, Le Bras, & Choi, 2021).

To reach my conclusions, I relied on the ethics of justice and on utilitarianism. According to justice theory, decision-making should focus on decisions that are fair to everybody involved. This implies that ethical decisions must align with the theory unless extenuating circumstances exist that can be justified in the case. The underlying idea is of society as a fair system of cooperation between free and equal persons. Fair conditions of cooperation are determined not by appeal to divine laws, natural law, or moral intuitions, but by an agreement to which free and equal persons consent. Fair conditions of cooperation (principles of justice) are obtained only if the agreement is fair and reached under fair conditions. Therefore, one must define a fair initial state for choosing the principles of justice. This fair initial state is the original position, a particular interpretation of the initial state found in every contractual theory.

In connection with the COVID-19 pandemic, the argument has been heard repeatedly that health protection must in every case take precedence over data protection. Although this statement is striking in its generality, it is not correct from either an ethical or a legal point of view. A pandemic can certainly provide a reason for temporarily accessing health data. However, care must be taken to ensure that this is done only to the extent absolutely necessary and that the broader context in which these data are used is clearly determined.

The basis of utilitarianism is the principle of utility. Utility in the sense of utilitarianism is generally considered to be the maximization of pleasure (“positive utilitarianism”) and the minimization of suffering (“negative utilitarianism”) (Igor & Konstantin, 2020). The basic thesis of utilitarianism, stated positively, is therefore: the consequences of an action should bring about the greatest possible happiness for the greatest possible number of those affected by the action. In negative formulation: the consequences of an action should cause as little suffering as possible, for as few as possible. A person who misuses personal information harms the greatest number of people, for example Facebook’s users, while simultaneously benefiting only himself, the minority.

According to utilitarianism, what matters is the well-being of each individual. Suppose we do not consider in our moral decisions the interests of other beings who can have negative and positive experiences. In that case, we neglect, at the same time, the absolute sum of happiness (Okorie & Badejo, 2020). Discriminating against others who have positive and negative experiences or preferences, for example by stealing their data, is incompatible with theories such as utilitarianism, because the latter must take into account all suffering and happiness.

Moreover, utilitarianism does not accept that we do nothing about the suffering of others, even if we did not cause it ourselves. Rather, according to utilitarianism, we should be concerned about the happiness of all those who can feel happiness. If something reduces the happiness of sentient beings, we should fight it no matter what. Everybody should therefore take an active role in curbing and fighting against the misuse of personal information.


Restate the importance of the issue

Data protection refers to the protection of individuals against the misuse of personal data. The term was once also used for the protection of scientific and technical data against loss or alteration, and for protection against theft of such data (Ducato, 2020). Today, the term usually refers to the protection of personal data, where it has also been used to mean protection against “data corruption.”

The importance of data protection has grown steadily since the development of digital technology because data processing, data collection, data storage, data transfer, and data analysis are becoming increasingly simple. Technical developments such as the Internet, e-mail, mobile telephony, video surveillance, and electronic payment methods create new opportunities for data collection. Both government agencies and private companies have an interest in personal information. Security authorities, for example, want to improve the fight against crime through dragnet searches and telecommunications surveillance, while financial authorities are interested in bank transactions to uncover tax offenses. Companies hope to achieve greater efficiency through employee monitoring, customer profiles are intended to help with marketing, including price differentiation, and credit agencies ensure customers’ solvency. This development contrasts with a certain indifference on the part of large sections of the population, in whose eyes data protection has little or no practical significance.

Data protection, that is, personal data protection, safeguards individuals’ fundamental right to informational self-determination (Abeler, Bäcker, Buermeyer, & Zillessen, 2020). A personal data breach occurs when the data for which your company/organization is responsible is affected by a security incident that results in a breach of confidentiality, availability or integrity. If this occurs and the breach is likely to pose a risk to the rights and freedoms of an individual, your company/organization must notify the supervisory authority without undue delay after it becomes aware of the breach. If your company/organization is a processor, it shall notify any personal data breach to the controller. If the personal data breach results in a high risk to the data subjects, they must also be notified unless effective technical and organizational measures or other measures have been implemented to ensure that this risk is no longer likely to occur. It is crucial to take appropriate technical and organizational measures as an organization to prevent possible personal data breaches.

A personal data breach can also be understood as a breach of security leading to the destruction, loss or alteration, whether accidental or unlawful, or to the unauthorized disclosure of or access to personal data transmitted, stored or otherwise processed. Based on this broad definition, processing of personal data that occurs in the daily course of business regularly qualifies as such a breach. For example, the final deletion of an email containing customer data, the sending of an email to the wrong recipient or the incorrect entry in systems containing personal data are covered.

As a first step, it is therefore important to document these processes in an incident register. In most cases, however, simple violations do not trigger an obligation to notify the data protection authorities or the data subject. The decisive factor for the obligation to notify is the risk to the data subjects, as specified in the applicable data protection legislation. If the breach is not likely to result in a risk to the rights and freedoms of natural persons, notification is not necessary. In other cases, a notification must be made immediately.

Paint the picture of the world if your plan is or is not implemented

Implementing my plan will create a society where respect for personal information is observed. Many things in daily life are now only available online. These include personal documents such as photos, the music collection, letters of application, and the tax return. How big a loss this data represents usually only becomes clear when it is actually gone. So it is important to protect data against attack and to ensure that it can be recovered in the event of a loss. According to reports, attacks with extortion Trojans (ransomware) have increased significantly in recent years.

Whether on social networks or shopping online – online shoppers should be sparing with their own data. An individual should never disclose more than is absolutely necessary. This is the only way to prevent personal information from falling into the wrong hands. This also applies to e-mail addresses. Online merchants, for example, always need an e-mail address in order to be able to send an order confirmation, but one should be especially careful with forums or competitions. In addition, spam e-mails in the inbox are often very annoying. To avoid them, the e-mail address should not be given too lightly.

What is more, when shopping online, payment by credit card or direct debit is common. The shopper should never transmit payment data without encryption; otherwise, the data could easily be intercepted and misused for other purposes. Many people do not know that by sending unencrypted data, online shoppers may be violating their bank’s terms and conditions. To be on the safe side, it is therefore advisable always to check whether a lock symbol or an “https” is displayed in the URL in the browser window before sending payment data. The symbol indicates encrypted data transmission.


References

Abeler, J., Bäcker, M., Buermeyer, U. and Zillessen, H., 2020. COVID-19 contact tracing and data protection can go together. JMIR mHealth and uHealth, 8(4), p.e19359.

Alferidah, D.K. and Jhanjhi, N.Z., 2020. A review on security and privacy issues and challenges in Internet of Things. International Journal of Computer Science and Network Security (IJCSNS), 20(4), pp.263-286.

Amsyar, I., Christopher, E., Dithi, A., Khan, A.N. and Maulana, S., 2020. The challenge of cryptocurrency in the era of the digital revolution: A review of systematic literature. Aptisi Transactions on Technopreneurship (ATT), 2(2), pp.153-159.

Brown, A.J., 2020. “Should I stay or should I leave?”: Exploring (dis)continued Facebook use after the Cambridge Analytica scandal. Social Media + Society, 6(1), p.2056305120913884.

Custers, B., Sears, A.M., Dechesne, F., Georgieva, I., Tani, T. and Van der Hof, S., 2019. EU personal data protection in policy and practice (Vol. 29). Springer.

Ducato, R., 2020. Data protection, scientific research, and the role of information. Computer Law & Security Review, 37, p.105412.

Ehondor, B.A. and Ogbu, S.U., 2020. Personal data protection and Facebook privacy infringements in Nigeria. Journal of Leadership, Accountability and Ethics, 17(2), pp.142-156.

Eichhorn, K., 2020. Why an internet that never forgets is especially bad for young people. MIT Technology Review, 123(1).

Hall, M., 2010. Facebook, American company. Britannica. Retrieved from

Hinds, J., Williams, E.J. and Joinson, A.N., 2020. “It wouldn’t happen to me”: Privacy concerns and perspectives following the Cambridge Analytica scandal. International Journal of Human-Computer Studies, 143, p.102498.

Igor, K. and Konstantin, S., 2020. Epistemological foundations of early legal utilitarianism. Wisdom, 1(14), pp.31-44.

Janeček, V., 2018. Ownership of personal data in the Internet of Things. Computer Law & Security Review, 34(5), pp.1039-1052.

Lallie, H.S., Shepherd, L.A., Nurse, J.R., Erola, A., Epiphaniou, G., Maple, C. and Bellekens, X., 2021. Cyber security in the age of COVID-19: A timeline and analysis of cyber-crime and cyber-attacks during the pandemic. Computers & Security, 105, p.102248.

Lourie, N., Le Bras, R. and Choi, Y., 2021. Scruples: A corpus of community ethical judgments on 32,000 real-life anecdotes. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 35, No. 15, pp. 13470-13479).

Mansted, K. and Logan, S., 2020. Citizen data: A centrepoint for trust in government and Australia’s national security. Fresh Perspectives in Security, pp.6-12.

Okorie, N. and Badejo, O.O., 2020. How impartialist is the utilitarian principle of utility. International Journal of Humanities and Social Science, 10(10).

Phillips, S., 2007. A brief history of Facebook. The Guardian. Retrieved from

Robbins, S., 2021. Bulk data collection, national security and ethics. In Counter-Terrorism. Edward Elgar Publishing.