Source: https://musebycl.io/advertising/klarna-gets-veryvery-swedish-loony-first-ads-us-market
The advertising campaign shown in the image is by Klarna, a Swedish payments company. Campaigns use both verbal and non-verbal language to attract prospective customers. About the language used in the advertisement, it is possible to state:
I. In the expression Swedish for smoother shopping, the term Swedish is a noun.
II. The adjective smoother, in the comparative degree of equality, relates to the texture suggested by the use of the color pink in the image.
III. The term shopping is a verb in the gerund.
IV. The adjective smoother could be replaced, without any change in meaning, by the adjective easier.
The correct alternatives are:
The Illusion of Free Choice in the Age of Augmented Decisions
The age of digitalization has created new opportunities for individuals, organizations, local governments, and countries to cooperate and mutually benefit from each other. Technologies such as smartphones and mobile Internet have enabled global networks and extended opportunities for individual and collective engagement and cooperation. Further, tasks that formerly meant tedious, long-lasting work or that could not be accomplished at all have become possible and even trivial with the extensive use of constantly improving technologies. However, more convenience has led to a growing reliance on these types of technologies in human decisions. In the augmented world in which we live, a growing number of decisions are designed by smart technologies – with unforeseen consequences for individuals and societies. Augmented decision-making undermines the freedom of choice. This is the price we pay for convenience.
The dark side of digital convenience: There is a darker and often invisible side of the coin as well.
Loss of freedom of choice: Augmented intelligence frees us from many chores, but it also limits free choice. We rely on our technologies, often unaware that we no longer get the full picture but instead a reality that might be curated for a specific purpose. In such cases, freedom of choice becomes an illusion. Humans have become accustomed to “doing everything” on their smartphones, and this tendency is reinforced by the apps and services of organizations such as Facebook, Google, and Netflix. Tech companies use technology as a vehicle to construct individual subjective reality, the internal space that frames our decision-making. Most of the information that humans base their decisions on is filtered and presorted by algorithms, which use huge amounts of user data to produce highly individualized recommendations to nudge us towards certain options. While such algorithms make our lives more convenient, they also fulfill various organizational objectives that users may not be aware of, and that may not be in their best interest. We do not know whether algorithms augmenting human decisions truly optimize the benefit to their users or rather the return on investment for a company. In other words, producing a positive user experience is often a means to an end, not an end in itself.
Polarization of beliefs: A potential cause of harm to societies and democracies is the emergence of information bubbles, enabling and strengthening the polarization of beliefs. Biased outcomes shape our identities, our view of the world, our social relationships, and most importantly, the decisions we make. For instance, YouTube alone accumulates in total more than one billion hours of watch time a day, and 70% of this time comes from watching recommended videos. Smart algorithms instantaneously and simultaneously recommend millions of videos to their users. At the same time, they test how to best retain user attention. Once a user continues to view another video, the recommendation was successful, and the algorithm has controlled the user’s decision-making process. Under these carefully designed circumstances, humans may lose the ability to consciously choose between freely exploring or stopping to explore the content on the platform. Free choice is competing against smart algorithms that track and use individual preferences, while the user cannot control or does not fully understand the purpose and functionality of these algorithms. If such an algorithm learns that conspiracy videos are optimizing user attention, it may continue to recommend such videos until even radical conspiracy theories become a kind of shared reality for users. What they consume affects how the users think and behave. Even though users decide what they watch, YouTube’s algorithms, and also Facebook’s and Twitter’s, have a large influence on what content – and what ideas and opinions – get amplified or silenced.
https://www.nim.org/en/dokument/2021mirdarksidesarticle freechoiceeng
In the text by Fabian Buder, Koen Pauwels, and Kairun Daikoku, the authors state the following: “A potential cause of harm to societies and democracies is the emergence of information bubbles, enabling and strengthening the polarization of beliefs. Biased outcomes shape our identities, our view of the world, our social relationships, and most importantly, the decisions we make.”
The term in bold, biased, can be replaced, without any change in meaning, by which of the following alternatives?
About the text by Fabian Buder, Koen Pauwels, and Kairun Daikoku, analyze the following assertions:
( ) Companies collect our data by offering to facilitate tasks that were once considered tedious and time-consuming.
( ) Social networks such as Facebook, YouTube, and Twitter control the distribution of entertainment content only.
( ) Even though we have grown accustomed to the convenience that apps offer us, our data is protected by the companies that created them.
( ) Free choice, in the Era of Digital Convenience, is a competition against algorithms that track and use individual preferences to select and distribute content.
The correct sequence is:
Fabian Buder, Koen Pauwels, and Kairun Daikoku use a well-known English expression – a means to an end – and extend its meaning so that it fits the purpose of their text.
About the use and structure of this expression in the text, analyze the following statements.
I. A means to an end is a phrasal verb, composed of a verb (means) and a preposition (end).
II. In the expression A means to an end, the term end is a verb and means the conclusion of a situation.
III. The expression can mean a process or activity carried out in order to achieve a goal.
IV. The expression is conjugated in the third person, as indicated by the -s at the end of the word means.
V. In the text, the authors use the expression to criticize companies that conceal what they do with their users’ data.
The correct statements are:
Source: https://cse.buffalo.edu/~rapaport/510/algcartoons.html
In the Dilbert cartoon, quotation marks are used around the word algorithm.
Regarding this specific use in the image in question, select the alternative that correctly describes the use of the quotation marks.