Journal articles: 'Multi-user content distribution' – Grafiati (2024)




Author: Grafiati

Published: 4 June 2021

Last updated: 1 February 2022

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles

Consult the top 48 journal articles for your research on the topic 'Multi-user content distribution.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse journal articles on a wide variety of disciplines and organise your bibliography correctly.

1

Li, Hao, Cheng Yang, and Jia Yin Tian. "OMA DRM-Based Key Management Protocol for IPTV." Applied Mechanics and Materials 733 (February 2015): 815–20. http://dx.doi.org/10.4028/www.scientific.net/amm.733.815.


Abstract:

Nowadays, with the development of hardware platforms, digital content can be presented in a greater variety of forms. Authorized users not only watch authorized TV programs on television, but also want to do so on mobile phones and iPads. Although the contemporary OMA DRM system provides an approach called domains to support multiple screens and the authorization of offline digital content, the standard PKI procedures used to authenticate the DRM Agent impose relatively heavy computation and power consumption, which is a poor fit for mobile terminals. Moreover, because public key distribution is complex and authorization is difficult, PKI is also ill-suited to multi-screen environments. As for authentication based on hardware devices, the uniqueness of a hardware ID cannot meet the actual demands of multi-screen users. In this paper, we propose a new key distribution protocol based on OMA DRM. The protocol meets the requirements of low power consumption and multiple screens while providing feasible security.
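The low-power symmetric approach the abstract motivates can be illustrated with a toy key-derivation sketch. This is not the paper's actual protocol: the domain key, device IDs, and the XOR "wrap" are illustrative assumptions. The idea is that a domain key shared once at registration lets each screen derive its own key with a single HMAC, avoiding per-device public-key handshakes.

```python
import hmac
import hashlib

def derive_device_key(domain_key: bytes, device_id: str) -> bytes:
    """Derive a per-device key from the shared domain key (HMAC-SHA256).

    An illustrative symmetric alternative to per-device PKI handshakes,
    not the protocol proposed in the paper."""
    return hmac.new(domain_key, device_id.encode(), hashlib.sha256).digest()

def wrap_content_key(device_key: bytes, content_key: bytes) -> bytes:
    # Toy key wrap by XOR with a derived keystream -- illustration only;
    # a real system would use an authenticated key-wrap algorithm.
    stream = hashlib.sha256(device_key).digest()
    return bytes(a ^ b for a, b in zip(content_key, stream))

domain_key = b"\x01" * 32                      # hypothetical key shared at registration
content_key = hashlib.sha256(b"movie-42").digest()

k_phone = derive_device_key(domain_key, "phone-001")
wrapped = wrap_content_key(k_phone, content_key)
# The device unwraps with the same derived key (XOR is its own inverse):
unwrapped = wrap_content_key(k_phone, wrapped)
assert unwrapped == content_key
```

Each screen in the domain derives a distinct key from its own ID, so revoking one device does not expose the others' keys.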


2

Komarov, Vladimir, Albert Sarafanov, and Sergey Tumkovskiy. "Comparison of the effectiveness of methods to increase the throughput of experimental equipment with remote access." Information and Control Systems, no. 6 (December 27, 2019): 68–76. http://dx.doi.org/10.31799/1684-8853-2019-6-68-76.


Abstract:

Introduction: The rapid development of the information society is expressed in the emergence of new models of economic activity, forms of providing educational and social services, scientific activities, etc., on the basis of constantly improving digital technologies. This, in turn, leads to new requirements for the knowledge and skills of modern engineers, whose preparation is based on various laboratory studies. The modern form of realizing such research is multi-user remote access from a digital educational environment to specialized experimental equipment (laboratory installations/stands/layouts), providing efficient use of this equipment. Purpose: Comparative analysis of methods for increasing the throughput of experimental equipment with multi-user remote access. Methods: Management of the user request service procedure based on scheduling algorithms which take into account the functional and parametric content of the processed requests (configuration settings of the object of study, type of measurements taken, parameters of test signals, etc.), as well as the features of the hardware construction. Results: On the basis of the proposed approach, the following methods are developed for multi-user distributed measuring-control systems: 1. A method of minimizing control operations, which determines the sequence of retrieving jobs from the queue in accordance with the minimum total control time for all requests currently in the queue. 2. A method of temporal division of multiple measurements, which distributes the statistical processing of measurement results between the software on a measuring-control computer and the user terminal. 3. A method of parallelizing functional operations, which reduces the time to service requests by programmatically splitting and concurrently performing the management and measuring operations for queued requests related to different control objects. A comparative analysis of the applied approaches has shown that the most effective methods, in terms of the cost of equipping a single user workplace, are those based on managing the process of servicing user requests. Practical relevance: The developed methods have made it possible to create a number of multi-user distributed measuring-control systems for the automation of educational and scientific experimental research with a 16–40% lower cost of equipping a workplace and a throughput of 30–50 concurrent users on the basis of one set of specialized experimental equipment.
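The first method reads like shortest-processing-time ordering over the queue; a minimal sketch under that assumption (the control times below are invented) shows why retrieving short requests first reduces the total time requests spend waiting:

```python
def total_time_in_queue(control_times):
    """Sum of request completion times when served in the given order."""
    elapsed, total = 0.0, 0.0
    for c in control_times:
        elapsed += c          # this request finishes at time `elapsed`
        total += elapsed
    return total

requests = [5.0, 1.0, 3.0, 2.0]              # hypothetical per-request control times
fifo = total_time_in_queue(requests)          # serve in arrival order
spt = total_time_in_queue(sorted(requests))   # shortest control time first

assert spt <= fifo   # short-first ordering never increases the total
```

Here FIFO yields a total of 31.0 time units versus 21.0 for shortest-first: every long request served early delays all requests behind it, which is the effect the minimization method exploits.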


3

Wu, Zhijun, Yun Zhang, and Enzhong Xu. "Multi-Authority Revocable Access Control Method Based on CP-ABE in NDN." Future Internet 12, no. 1 (January 16, 2020): 15. http://dx.doi.org/10.3390/fi12010015.


Abstract:

For the future of the Internet, information-centric networking (ICN) is regarded as a potential solution, or even the key, to many current problems, because it has natural advantages in terms of content distribution, mobility, and security. Named Data Networking (NDN), one of the network-architecture research projects initiated in the United States, is the most widely studied ICN design. The in-network caching in NDN separates content from content publishers, but content security is threatened by the lack of security controls. We therefore propose a multi-authority revocable access control method based on CP-ABE. This method constructs a proxy-assisted access control scheme that can implement effective data access control in NDN networks with high security. Because decryption is partially performed on the NDN node, the decryption burden of the consumer client is reduced; effective user and attribute revocation is achieved; forward and backward security are ensured; and collusion attacks are prevented. Finally, security and performance analyses show that the scheme is secure and efficient.
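The idea of partial decryption on the in-network node can be illustrated, in a greatly simplified form, by splitting a decryption exponent between the node and the consumer. The toy ElGamal sketch below is not CP-ABE and not the paper's scheme; it only shows the delegation pattern, and the group parameters are chosen for illustration, not production use:

```python
import secrets

p = 2**127 - 1          # a Mersenne prime; toy group, not a production choice
g = 3

# The consumer's secret exponent is split: x1 lives on the NDN node,
# x2 on the lightweight consumer client.
x1 = secrets.randbelow(p - 1)
x2 = secrets.randbelow(p - 1)
y = pow(g, x1 + x2, p)  # combined public key used by the publisher

def encrypt(m: int):
    r = secrets.randbelow(p - 1)
    return pow(g, r, p), (m * pow(y, r, p)) % p

def node_partial_decrypt(c1, c2):
    # The in-network node strips its share x1 of the exponent.
    return (c2 * pow(c1, p - 1 - x1, p)) % p

def client_finish(c1, partial):
    # The consumer finishes with its lightweight share x2.
    return (partial * pow(c1, p - 1 - x2, p)) % p

m = 123456789
c1, c2 = encrypt(m)
assert client_finish(c1, node_partial_decrypt(c1, c2)) == m
```

Neither party alone can decrypt, yet the client's final step is a single modular exponentiation, mirroring how proxy-assisted schemes shift most of the decryption cost off the consumer.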


4

Chen, Yuhao, Alexander Wong, Yuan Fang, Yifan Wu, and Linlin Xu. "Deep Residual Transform for Multi-scale Image Decomposition." Journal of Computational Vision and Imaging Systems 6, no. 1 (January 15, 2021): 1–5. http://dx.doi.org/10.15353/jcvis.v6i1.3537.


Abstract:

Multi-scale image decomposition (MID) is a fundamental task in computer vision and image processing that involves the transformation of an image into a hierarchical representation comprising different levels of visual granularity, from coarse structures to fine details. A well-engineered MID disentangles the image signal into meaningful components which can be used in a variety of applications such as image denoising, image compression, and object classification. Traditional MID approaches such as wavelet transforms tackle the problem through carefully designed basis functions under rigid decomposition structure assumptions. However, as the information distribution varies from one type of image content to another, rigid decomposition assumptions lead to inefficient representations, i.e., some scales can contain little to no information. To address this issue, we present the Deep Residual Transform (DRT), a data-driven MID strategy where the input signal is transformed into a hierarchy of non-linear representations at different scales, with each representation independently learned as the representational residual of previous scales at a user-controlled detail level. As such, the proposed DRT progressively disentangles scale information from the original signal by sequentially learning residual representations. The decomposition flexibility of this approach allows for representations highly tailored to specific types of image content, and results in greater representational efficiency and compactness. In this study, we realize the proposed transform by leveraging a hierarchy of sequentially trained autoencoders. To explore the efficacy of the proposed DRT, we leverage two datasets comprising very different types of image content: 1) CelebFaces and 2) Cityscapes. Experimental results show that the proposed DRT achieved highly efficient information decomposition on both datasets despite their very different visual granularity characteristics.
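The sequential-residual idea can be sketched with a linear stand-in for each trained autoencoder (a truncated SVD), showing how every scale is fit to the residual of the previous ones at a user-controlled detail level. The data and rank choices below are arbitrary assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy "images": 100 samples of a 64-dimensional correlated signal.
X = rng.standard_normal((100, 64)) @ rng.standard_normal((64, 64))

def linear_autoencoder_recon(R, k):
    """Rank-k linear encode/decode (a stand-in for one trained autoencoder)."""
    mean = R.mean(axis=0)
    U, s, Vt = np.linalg.svd(R - mean, full_matrices=False)
    basis = Vt[:k]                       # this scale's learned representation
    return (R - mean) @ basis.T @ basis + mean

residual = X
errors = []
for k in (4, 8, 16):                     # detail level per scale (user-chosen)
    recon = linear_autoencoder_recon(residual, k)
    residual = residual - recon          # next scale learns what this one missed
    errors.append(float(np.linalg.norm(residual)))

assert errors[0] > errors[1] > errors[2]   # each scale removes more signal energy
```

Because each stage only sees the previous residual, the scales are disentangled by construction, which is the property the DRT exploits with non-linear autoencoders instead of this linear surrogate.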


5

McCulloh, Russell J., Sarah D. Fouquet, Joshua Herigon, Eric A. Biondi, Brandan Kennedy, Ellen Kerns, Adrienne DePorre, et al. "Development and implementation of a mobile device-based pediatric electronic decision support tool as part of a national practice standardization project." Journal of the American Medical Informatics Association 25, no. 9 (June 7, 2018): 1175–82. http://dx.doi.org/10.1093/jamia/ocy069.


Abstract:

Objective Implementing evidence-based practices requires a multi-faceted approach. Electronic clinical decision support (ECDS) tools may encourage evidence-based practice adoption. However, data regarding the role of mobile ECDS tools in pediatrics are scant. Our objective is to describe the development, distribution, and usage patterns of a smartphone-based ECDS tool within a national practice standardization project. Materials and Methods We developed a smartphone-based ECDS tool for use in the American Academy of Pediatrics, Value in Inpatient Pediatrics Network project entitled “Reducing Excessive Variation in the Infant Sepsis Evaluation (REVISE).” The mobile application (app), PedsGuide, was developed using evidence-based recommendations created by an interdisciplinary panel. App workflow and content were aligned with clinical benchmarks; the app interface was adjusted after usability heuristic review. Usage patterns were measured using Google Analytics. Results Overall, 3805 users across the United States downloaded PedsGuide from December 1, 2016, to July 31, 2017, leading to 14 256 use sessions (average 3.75 sessions per user). Users engaged in 60 442 screen views, including 37 424 (61.8%) screen views that displayed content related to the REVISE clinical practice benchmarks, including hospital admission appropriateness (26.8%), length of hospitalization (14.6%), and diagnostic testing recommendations (17.0%). Median user touch depth was 5 [IQR 5]. Discussion We observed rapid dissemination and in-depth engagement with PedsGuide, demonstrating feasibility for using smartphone-based ECDS tools within national practice improvement projects. Conclusions ECDS tools may prove valuable in future national practice standardization initiatives. Work should next focus on developing robust analytics to determine ECDS tools’ impact on medical decision making, clinical practice, and health outcomes.


6

Heginbottom, J. Alan. "Permafrost mapping: a review." Progress in Physical Geography: Earth and Environment 26, no. 4 (December 2002): 623–42. http://dx.doi.org/10.1191/0309133302pp355ra.


Abstract:

Permafrost maps have developed over the last century from small line drawings showing the outer limits of the areas within which perennially frozen ground was known or supposed to exist, to large-scale, multi-sheet, multi-faceted, complex earth-science documents. These show, in considerable detail, the estimated distribution of frozen ground in terms of its spatial continuity, thickness, ground temperature and ground ice content. Other related geo-environmental information is commonly included along with the permafrost attributes. The key geocryological issues in permafrost mapping comprise definition, purpose, classification, data acquisition, and data storage and processing. The principal cartographic issues relate to map design, legend development and map production. The recent development of geographic information system (GIS) software suitable for use on a desktop computer allows the geocryologist to undertake many map compilation and production tasks directly. GIS software also allows the map compiler or map user to manipulate the data, layer by layer, and so create specialized maps for specific purposes. Computer storage and processing of permafrost data allows large volumes of data to be handled and, when combined with modelling techniques, allows these large volumes of data to be used in the compilation of maps. Integration of modelling techniques with GIS is a powerful tool for assessing the response of permafrost to a changing climate. Other research directions are noted.


7

Barrios-Rubio, Andrés. "From the Antenna to the Display Devices: Transformation of the Colombian Radio Industry." Journalism and Media 2, no. 2 (May 11, 2021): 208–24. http://dx.doi.org/10.3390/journalmedia2020012.


Abstract:

Consolidation of the digital environment has become an irreversible global reality and, for the Colombian radio industry, it implies not only a process of transformation in its actions but, above all, continuous learning. Technological innovation imposes new forms of consumption whose logic corresponds to new systems for the production, distribution and commercialization of information, culture, science and entertainment. Object of study. The adaptation of the radio medium to the digital ecosystem of audiences invites researchers to focus on the media's use of web radio, app radio and social media; the relevance of sound semiotics compared to other components of the message on users' screens; and the alterations suffered by the business model and productive routines of radio. Methodology. This research took as its focus of study three Colombian radio stations and their informative stations—Caracol Radio, W Radio, Blu Radio, RCN Radio and La FM—through a mixed methodology. Quantitative instruments—numerical data to monitor activities on social platforms—and qualitative instruments—interpretation of messages and the visual composition of the message—allow for monitoring and analyzing the performance of the radio medium in the digital environment, and the tactical approach of radio agents, to delineate the strategies that promote the expansion, positioning and participation of radio in the Colombian media ecosystem. Results. Normalization of connectivity, ubiquity, timelessness and interactivity are, today, inherent values of the content broadcast by the radio industry, which needs to appropriate the tastes and interests of the audience through multi-device, multi-tasking and multi-user consumption. Conclusion. Listeners' consumption is concentrated on smartphone screens, which creates a habit of listening and monitoring that forces the media to incorporate the format—and language—of video into their productive dynamics in order to attract and retain the attention of their audiences.


8

Keppens, Arno, Jean-Christopher Lambert, José Granville, Daan Hubert, Tijl Verhoelst, Steven Compernolle, Barry Latter, et al. "Quality assessment of the Ozone_cci Climate Research Data Package (release 2017) – Part 2: Ground-based validation of nadir ozone profile data products." Atmospheric Measurement Techniques 11, no. 6 (June 27, 2018): 3769–800. http://dx.doi.org/10.5194/amt-11-3769-2018.


Abstract:

Atmospheric ozone plays a key role in air quality and the radiation budget of the Earth, both directly and through its chemical influence on other trace gases. Assessments of the atmospheric ozone distribution and associated climate change therefore demand accurate vertically resolved ozone observations with both stratospheric and tropospheric sensitivity, on both global and regional scales, and both in the long term and at shorter timescales. Such observations have been acquired by two series of European nadir-viewing ozone profilers, namely the scattered-light UV–visible spectrometers of the GOME family, launched regularly since 1995 (GOME, SCIAMACHY, OMI, GOME-2A/B, TROPOMI, and the upcoming Sentinel-5 series), and the thermal infrared emission sounders of the IASI type, launched regularly since 2006 (IASI on Metop platforms and the upcoming IASI-NG on Metop-SG). In particular, several Level-2 retrieved, Level-3 monthly gridded, and Level-4 assimilated nadir ozone profile data products have been improved and harmonized in the context of the ozone project of the European Space Agency's Climate Change Initiative (ESA Ozone_cci). To verify their fitness for purpose, these ozone datasets must undergo a comprehensive quality assessment (QA), including (a) detailed identification of their geographical, vertical, and temporal domains of validity; (b) quantification of their potential bias, noise, and drift and their dependences on major influence quantities; and (c) assessment of the mutual consistency of data from different sounders. For this purpose we have applied to the Ozone_cci Climate Research Data Package (CRDP) released in 2017 the versatile QA and validation system Multi-TASTE, which has been developed in the context of several heritage projects (ESA's Multi-TASTE, EUMETSAT's O3M-SAF, and the European Commission's FP6 GEOmon and FP7 QA4ECV).
This work, as the second in a series of four Ozone_cci validation papers, reports for the first time on data content studies, information content studies and ground-based validation for both the GOME- and IASI-type climate data records combined. The ground-based reference measurements have been provided by the Network for the Detection of Atmospheric Composition Change (NDACC), NASA's Southern Hemisphere Additional Ozonesonde programme (SHADOZ), and other ozonesonde and lidar stations contributing to the World Meteorological Organisation's Global Atmosphere Watch (WMO GAW). The nadir ozone profile CRDP quality assessment reveals that all nadir ozone profile products under study fulfil the GCOS user requirements in terms of observation frequency and horizontal and vertical resolution. Yet all L2 observations also show sensitivity outliers in the UTLS and are strongly correlated vertically due to substantial averaging kernel fluctuations that extend far beyond the kernel's 15 km FWHM. The CRDP typically does not comply with the GCOS user requirements in terms of total uncertainty and decadal drift, except for the UV–visible L4 dataset. The drift values of the L2 GOME and OMI, the L3 IASI, and the L4 assimilated products are found to be overall insignificant, however, and applying appropriate altitude-dependent bias and drift corrections makes the data fit for climate and atmospheric composition monitoring and modelling purposes. Dependence of the Ozone_cci data quality on major influence quantities – resulting in data screening suggestions to users – and perspectives for the Copernicus Sentinel missions are additionally discussed.


9

Mascetti, Luca, Maria Arsuaga Rios, Enrico Bocchi, Joao Calado Vicente, Belinda Chan Kwok Cheong, Diogo Castro, Julien Collet, et al. "CERN Disk Storage Services: Report from last data taking, evolution and future outlook towards Exabyte-scale storage." EPJ Web of Conferences 245 (2020): 04038. http://dx.doi.org/10.1051/epjconf/202024504038.


Abstract:

The CERN IT Storage group operates multiple distributed storage systems to support all CERN data storage requirements: the physics data generated by LHC and non-LHC experiments; object and file storage for infrastructure services; block storage for the CERN cloud system; filesystems for general use and specialized HPC clusters; a content distribution filesystem for software distribution and condition databases; and sync&share cloud storage for end-user files. The total integrated capacity of these systems exceeds 0.6 Exabyte. Large-scale experiment data taking has been supported by EOS and CASTOR for the last 10+ years. Particular highlights for 2018 include the special heavy-ion run, which was the last part of the LHC Run2 programme: the IT storage systems sustained over 10 GB/s to flawlessly collect and archive more than 13 PB of data in a single month. While tape archival continues to be handled by CASTOR, the effort to migrate the current experiment workflows to the new CERN Tape Archive system (CTA) is underway. The Ceph infrastructure has operated for more than 5 years to provide block storage to the CERN IT private OpenStack cloud, a shared filesystem (CephFS) to HPC clusters, and NFS storage to replace commercial filers. An S3 service was introduced in 2018, following increased user requirements for S3-compatible object storage from physics experiments and IT use-cases. Since its introduction in 2014, CERNBox has become a ubiquitous cloud storage interface for all CERN user groups: physicists, engineers and administration. CERNBox provides easy access to multi-petabyte data stores from a multitude of mobile and desktop devices and all mainstream, modern operating systems (Linux, Windows, macOS, Android, iOS). CERNBox provides synchronized storage for end-users' devices as well as easy sharing for individual users and e-groups.
CERNBox has also become a storage platform to host online applications to process the data, such as SWAN (Service for Web-based Analysis), as well as file editors such as Collabora Online, Only Office, Draw.IO and more. An increasing number of online applications in the Windows infrastructure use CIFS/SMB access to CERNBox files. CVMFS provides software repositories for all experiments across the WLCG infrastructure and has recently been optimized to efficiently handle nightly builds. While AFS continues to provide a general-purpose filesystem for internal CERN users, especially as the $HOME login area on the central computing infrastructure, the migration of project and web spaces has significantly advanced. In this paper, we report on the experiences from the last year of LHC Run2 data taking and the evolution of our services in the past year. We will highlight upcoming changes and future improvements and challenges.


10

Hall, Michael J., Neil E. Olson, and Roger D. Chamberlain. "Utilizing Virtualized Hardware Logic Computations to Benefit Multi-User Performance." Electronics 10, no. 6 (March 12, 2021): 665. http://dx.doi.org/10.3390/electronics10060665.


Abstract:

Recent trends in computer architecture have increased the role of dedicated hardware logic as an effective approach to computation. Virtualization of logic computations (i.e., by sharing a fixed function) provides a means to effectively utilize hardware resources by context switching the logic to support multiple data streams of computation. Multiple applications or users can take advantage of this by using the virtualized computation in an accelerator as a computational service, such as in a software as a service (SaaS) model over a network. In this paper, we analyze the performance of virtualized hardware logic and develop M/G/1 queueing model equations and simulation models to predict system performance. We predict system performance using the queueing model and tune a schedule for optimal performance. We observe that high variance and high load give high mean latency. The simulation models validate the queueing model, predict queue occupancy, show that a Poisson input process distribution (assumed in the queueing model) is reasonable for low load, and expand the set of scheduling algorithms considered.
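The M/G/1 prediction rests on the standard Pollaczek-Khinchine formula for the mean waiting time, which directly reproduces the observation that high variance and high load give high mean latency. The parameter values below are illustrative, not taken from the paper:

```python
def mg1_mean_latency(lam, mean_s, var_s):
    """Mean time in system for an M/G/1 queue (Pollaczek-Khinchine formula).

    lam: Poisson arrival rate; mean_s, var_s: service-time mean and variance."""
    rho = lam * mean_s                      # offered load
    assert rho < 1, "queue is unstable"
    es2 = var_s + mean_s**2                 # E[S^2]
    wait = lam * es2 / (2 * (1 - rho))      # mean waiting time in queue
    return mean_s + wait                    # service + waiting

# Same load (rho = 0.8), different service-time variance:
low_var = mg1_mean_latency(lam=0.8, mean_s=1.0, var_s=0.1)
high_var = mg1_mean_latency(lam=0.8, mean_s=1.0, var_s=4.0)

assert high_var > low_var   # high variance -> high mean latency at fixed load
```

Because the waiting term scales with E[S²] and with 1/(1−ρ), latency blows up as either the variance of the virtualized computation or the load approaches its limit, which is what makes schedule tuning worthwhile.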


11

O'Neill, M. A., and C. C. Hilgetag. "The portable UNIX programming system (PUPS) and CANTOR: a computational environment for dynamical representation and analysis of complex neurobiological data." Philosophical Transactions of the Royal Society of London. Series B: Biological Sciences 356, no. 1412 (August 29, 2001): 1259–76. http://dx.doi.org/10.1098/rstb.2001.0912.


Abstract:

Many problems in analytical biology, such as the classification of organisms, the modelling of macromolecules, or the structural analysis of metabolic or neural networks, involve complex relational data. Here, we describe a software environment, the portable UNIX programming system (PUPS), which has been developed to allow efficient computational representation and analysis of such data. The system can also be used as a general development tool for database and classification applications. As the complexity of analytical biology problems may lead to computation times of several days or weeks even on powerful computer hardware, the PUPS environment gives support for persistent computations by providing mechanisms for dynamic interaction and homeostatic protection of processes. Biological objects and their interrelations are also represented in a homeostatic way in PUPS. Object relationships are maintained and updated by the objects themselves, thus providing a flexible, scalable and current data representation. Based on the PUPS environment, we have developed an optimization package, CANTOR, which can be applied to a wide range of relational data and which has been employed in different analyses of neuroanatomical connectivity. The CANTOR package makes use of the PUPS system features by modifying candidate arrangements of objects within the system's database. This restructuring is carried out via optimization algorithms that are based on user-defined cost functions, thus providing flexible and powerful tools for the structural analysis of the database content. The use of stochastic optimization also enables the CANTOR system to deal effectively with incomplete and inconsistent data. Prototypical forms of PUPS and CANTOR have been coded and used successfully in the analysis of anatomical and functional mammalian brain connectivity, involving complex and inconsistent experimental data.
In addition, PUPS has been used for solving multivariate engineering optimization problems and to implement the digital identification system (DAISY), a system for the automated classification of biological objects. PUPS is implemented in ANSI C under the POSIX.1 standard and is to a great extent architecture- and operating-system independent. The software is supported by systems libraries that allow multi-threading (the concurrent processing of several database operations), as well as the distribution of the dynamic data objects and library operations over clusters of computers. These attributes make the system easily scalable, and in principle allow the representation and analysis of arbitrarily large sets of relational data. PUPS and CANTOR are freely distributed (http://www.pups.org.uk) as open-source software under the GNU license agreement.
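The cost-function-driven restructuring of candidate arrangements described above can be sketched as generic simulated annealing; the swap move and the toy cost function below are assumptions for illustration, not CANTOR's actual algorithms:

```python
import math
import random

random.seed(1)

def cost(arrangement):
    """Hypothetical user-defined cost: count adjacent out-of-order pairs."""
    return sum(1 for a, b in zip(arrangement, arrangement[1:]) if a > b)

def anneal(items, steps=5000, t0=2.0):
    """Stochastic optimization over candidate arrangements."""
    state = items[:]
    current = cost(state)
    best, best_cost = state[:], current
    for i in range(steps):
        t = t0 * (1 - i / steps) + 1e-9      # cooling schedule
        j, k = random.sample(range(len(state)), 2)
        state[j], state[k] = state[k], state[j]   # propose a swap
        new = cost(state)
        # Accept improvements always; accept worsenings with decaying probability,
        # which lets the search escape local minima in noisy/inconsistent data.
        if new <= current or random.random() < math.exp((current - new) / t):
            current = new
            if new < best_cost:
                best, best_cost = state[:], new
        else:
            state[j], state[k] = state[k], state[j]   # reject: undo swap
    return best

items = list(range(10))
random.shuffle(items)
result = anneal(items)
assert cost(result) <= cost(items)
```

Swapping in a different `cost` function is all that is needed to target a different structural analysis, which is the flexibility the abstract attributes to CANTOR.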


12

Guo, Chen, Xingbing Fu, Yaojun Mao, Guohua Wu, Fagen Li, and Ting Wu. "Multi-User Searchable Symmetric Encryption with Dynamic Updates for Cloud Computing." Information 9, no. 10 (September 28, 2018): 242. http://dx.doi.org/10.3390/info9100242.


Abstract:

With the advent of cloud computing, more and more users begin to outsource encrypted files to cloud servers to provide convenient access and obtain security guarantees. Searchable encryption (SE) allows a user to search the encrypted files without leaking information related to the contents of the files. Searchable symmetric encryption (SSE) is an important branch of SE. Most of the existing SSE schemes considered single-user settings, which cannot meet the requirements for data sharing. In this work, we propose a multi-user searchable symmetric encryption scheme with dynamic updates. This scheme is applicable to the usage scenario where one data owner encrypts sensitive files and shares them among multiple users, and it allows secure and efficient searches/updates. We use key distribution and re-encryption to achieve multi-user access while avoiding a series of issues caused by key sharing. Our scheme is constructed based on the index structure where a bit matrix is combined with two static hash tables, pseudorandom functions and hash functions. Our scheme is proven secure in the random oracle model.
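The general PRF-based flavour of SSE indexing can be sketched as follows. This is not the paper's bit-matrix and hash-table construction, nor its re-encryption machinery; it only shows the core idea that keyword labels and search tokens are keyed PRF outputs, so the server matches tokens without learning the keywords:

```python
import hmac
import hashlib
from collections import defaultdict

def prf(key: bytes, msg: str) -> bytes:
    """Pseudorandom function instantiated as HMAC-SHA256."""
    return hmac.new(key, msg.encode(), hashlib.sha256).digest()

def build_index(key: bytes, files: dict) -> dict:
    """Toy encrypted index: keyword labels are PRF outputs, not plaintext."""
    index = defaultdict(list)
    for file_id, words in files.items():
        for w in set(words):
            index[prf(key, w)].append(file_id)
    return dict(index)

def search(index: dict, key: bytes, word: str) -> list:
    # The search token is the PRF of the keyword; without `key`, the
    # server cannot relate tokens back to keywords.
    return index.get(prf(key, word), [])

key = b"k" * 32   # owner's secret; in a multi-user scheme it is distributed to users
files = {"f1": ["cloud", "search"], "f2": ["cloud"], "f3": ["update"]}
index = build_index(key, files)

assert sorted(search(index, key, "cloud")) == ["f1", "f2"]
assert search(index, key, "missing") == []
```

A multi-user scheme layers key distribution and re-encryption on top of this so that users other than the owner can form valid tokens without sharing the owner's long-term key directly.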


13

Liu, Xumin, and Chen Ding. "Learning Workflow Models from Event Logs Using Co-clustering." International Journal of Web Services Research 10, no. 3 (July 2013): 42–59. http://dx.doi.org/10.4018/ijwsr.2013070103.


Abstract:

The authors propose a co-clustering approach to extract workflow models by analyzing event logs. The authors consider two major issues that are overlooked by most existing process mining approaches. First, a complex system typically runs multiple workflow models, all of which share the same log system. However, current approaches mainly focus on learning a single workflow model from event logs. Second, most systems support multiple users, and each user is typically associated with (or uses) a certain number of operation sequences, which may follow one or more workflow models. Users can thus be leveraged as an important context when learning workflow models. However, this is not considered by current approaches. Therefore, the authors propose to learn User Behavior Patterns (UBPs) that reflect the usage pattern of a user when accessing a business process system and exploit them to discover multiple workflow models from the event log of a complex system. The authors model a UBP as a probabilistic distribution on sequences, which allows computing the similarity between UBPs and sequences. The authors then co-cluster users and sequences to generate two types of clusters: user clusters that group users sharing a similar UBP, and sequence clusters that group sequences that are instances of the same workflow model. Each workflow model can then be learned by analyzing its instances. The authors conducted a comprehensive experimental study to evaluate the effectiveness and efficiency of the proposed approach.
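The UBP idea can be sketched as a one-pass toy: normalize each user's sequence counts into a behavior distribution, group users by similarity to seed users, then assign each sequence to the user group that runs it most. The seed choices and data are invented, and the authors' probabilistic co-clustering is considerably richer than this:

```python
import numpy as np

# Rows: users; columns: operation sequences; entries: usage counts.
# Two hypothetical workflows produce a block structure.
counts = np.array([
    [9, 7, 0, 1],   # users 0-2 mostly run sequences 0-1 (workflow A)
    [8, 6, 1, 0],
    [7, 9, 0, 0],
    [0, 1, 8, 9],   # users 3-4 mostly run sequences 2-3 (workflow B)
    [1, 0, 9, 7],
])

# Each user's behavior pattern as a distribution over sequences.
ubp = counts / counts.sum(axis=1, keepdims=True)

# Cluster users by similarity to two seed users (one per cluster).
seeds = ubp[[0, 3]]
user_cluster = np.argmax(ubp @ seeds.T, axis=1)

# Co-cluster: assign each sequence to the user cluster that uses it most.
usage = np.stack([counts[user_cluster == c].sum(axis=0) for c in (0, 1)])
seq_cluster = np.argmax(usage, axis=0)

assert user_cluster.tolist() == [0, 0, 0, 1, 1]
assert seq_cluster.tolist() == [0, 0, 1, 1]
```

Each sequence cluster then collects the instances of one workflow model, which is the input the model-learning step needs.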


14

Luo, Jingjing, Junjie Zhen, Peng Zhou, Wei Chen, and Yuzhu Guo. "An iPPG-Based Device for Pervasive Monitoring of Multi-Dimensional Cardiovascular Hemodynamics." Sensors 21, no. 3 (January 28, 2021): 872. http://dx.doi.org/10.3390/s21030872.


Abstract:

Hemodynamic activities, as an essential measure of physiological and psychological characteristics, can be used for cardiovascular and cerebrovascular disease detection. Photoplethysmography imaging (iPPG) can be applied for such purposes with the advantage of being non-contact; however, most iPPG systems for cardiovascular hemodynamics are developed for laboratory research, which limits their application in pervasive healthcare. In this study, a video-based facial iPPG detection device was devised to provide multi-dimensional spatiotemporal hemodynamic pulsations for applications with high portability and self-monitoring requirements. A series of algorithms has also been developed for the extraction of physiological indices such as heart rate and breathing rate, facial region analysis, and visualization of the hemodynamic pulsation distribution. Results showed that the new device can provide reliable measurement of a rich range of cardiovascular hemodynamics. Combined with advanced computing techniques, the new non-contact iPPG system provides a promising solution for user-friendly pervasive healthcare.
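Heart-rate extraction from an iPPG trace is commonly done by locating the dominant spectral peak within a physiological frequency band; a minimal sketch on a synthetic pulse signal follows (the authors' exact pipeline is not specified here, and the sampling rate and band limits are assumptions):

```python
import numpy as np

fs = 30.0                                    # assumed camera frame rate, Hz
t = np.arange(0, 20, 1 / fs)                 # 20 s of synthetic facial signal
hr_hz = 1.2                                  # ground-truth pulse: 72 bpm
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * hr_hz * t) + 0.3 * rng.standard_normal(t.size)

# Locate the dominant frequency in a plausible heart-rate band (42-180 bpm).
spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
band = (freqs >= 0.7) & (freqs <= 3.0)
hr_est = freqs[band][np.argmax(spectrum[band])] * 60   # beats per minute

assert abs(hr_est - 72) < 3                  # recovers ~72 bpm
```

Breathing rate is obtained the same way with a lower band (roughly 0.1 to 0.5 Hz); the 20 s window gives a frequency resolution of 0.05 Hz, i.e. 3 bpm, which bounds the estimate's precision.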

APA, Harvard, Vancouver, ISO, and other styles

15

Virtanen, Juho-Pekka, Hannu Hyyppä, Matti Kurkela, Matti T. Vaaja, Tuulia Puustinen, Kaisa Jaalama, Arttu Julin, et al. "Browser based 3D for the built environment." Nordic Journal of Surveying and Real Estate Research 13, no. 1 (December 13, 2018): 54–76. http://dx.doi.org/10.30672/njsr.67846.

Full text

Abstract:

Digital 3D geometric models have become a central tool for geo-information. For many participatory and collaborative applications, distributing these models easily is essential. Several technical solutions exist for creating online systems that facilitate the study of 3D models in the context of the built environment. To provide an overview of browser-based interactive 3D visualizations, we present a set of existing systems applied in Finland and discuss their common properties and differences. To obtain first-hand experience, we experiment with an online 3D application development platform. The systems studied show the high potential of browser-based 3D applications: interactive visualizations with multi-user characteristics and dynamic elements can be built by leveraging 3D web technologies. Finally, we suggest a framework for discussing browser-based 3D systems, covering the spectrum of possibilities available in modern web-based 3D for built environment applications.

APA, Harvard, Vancouver, ISO, and other styles

16

Coudoux, François-Xavier. "Extending Coverage of High Definition TV Services over ADSL2 with Optimized Reception Quality using H.264/AVC Transrating." Journal of Communications Software and Systems 8, no. 3 (September 21, 2012): 68. http://dx.doi.org/10.24138/jcomss.v8i3.168.

Full text

Abstract:

In this paper, we present a new Joint Source-Channel Coding (JSCC) architecture to extend the coverage of H.264/AVC High Definition (HD) video delivery over Digital Subscriber Line (DSL). The proposed solution combines low-complexity H.264/AVC transrating with multi-carrier transmission, and takes into account realistic ADSL2 specifications covering all OSI layers. Both the transrating and the bit- and power-loading transmission parameters are automatically optimized in terms of end-user perceived quality, with respect to the characteristics of the given subscriber's loop. Several original contributions are included: a new optimization algorithm has been developed, as well as a full rate-distortion model of the H.264/AVC transrater's performance. Simulation results show that the proposed solution can extend the coverage area of HD video delivery by more than one kilometre. It should allow the widespread distribution of HD video content and increase the number of eligible subscribers.
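
At a very coarse level, the joint optimization can be illustrated as two steps: estimate the achievable line rate from per-tone SNRs with gap-approximation bit loading, then cap the transrater's output bitrate accordingly. The SNR gap and overhead factor below are illustrative assumptions, not the paper's optimized parameters.

```python
import math

def line_capacity_kbps(snr_db_per_tone, gap_db=9.8, symbol_rate=4000):
    """Gap-approximation bit loading: each DMT tone carries
    b_i = log2(1 + SNR_i / Gamma) bits per symbol; summing over tones and
    multiplying by the 4 kHz DMT symbol rate gives the achievable bitrate."""
    gamma = 10 ** (gap_db / 10)
    bits_per_symbol = sum(
        math.log2(1 + 10 ** (snr_db / 10) / gamma) for snr_db in snr_db_per_tone)
    return bits_per_symbol * symbol_rate / 1000.0

def transrate_target_kbps(source_kbps, capacity_kbps, overhead=0.10):
    """Transrater output bitrate: the source rate, capped by the line
    capacity minus a (hypothetical) 10% protocol overhead."""
    return min(source_kbps, capacity_kbps * (1 - overhead))
```

On a long loop the per-tone SNRs drop, the capacity falls below the source bitrate, and the transrater absorbs the difference instead of the service becoming unavailable.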

APA, Harvard, Vancouver, ISO, and other styles

17

Yen, Steven, Melody Moh, and Teng-Sheng Moh. "Detecting Compromised Social Network Accounts Using Deep Learning for Behavior and Text Analyses." International Journal of Cloud Applications and Computing 11, no. 2 (April 2021): 97–109. http://dx.doi.org/10.4018/ijcac.2021040106.

Full text

Abstract:

Social networks allow people to connect to one another. Over time, these accounts become an essential part of one's online identity. An account stores various personal data and contains one's network of acquaintances. Attackers seek to compromise user accounts for various malicious purposes, such as distributing spam and phishing messages. Timely detection of compromises thus becomes crucial for protecting users and social networks. This article proposes a novel system for detecting the compromise of a social network account by considering both posting behavior and textual content. A deep multi-layer perceptron-based autoencoder is leveraged to consolidate diverse features and extract underlying relationships. Experiments show that the proposed system outperforms previous techniques that considered only behavioral information. The authors believe that this work is well-timed and especially significant in a world that has been largely locked down by the COVID-19 pandemic and thus depends much more on reliable social networks to stay connected.
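
The consolidation step can be sketched with a minimal numpy autoencoder: concatenated behavioral and text features are compressed to a low-dimensional code, and accounts whose reconstruction error is unusually high would be flagged. This toy is illustrative only and is not the paper's architecture.

```python
import numpy as np

class TinyAutoencoder:
    """Minimal MLP autoencoder: tanh hidden layer (the code), linear output.
    Trained on normal accounts; a compromised account would show a high
    reconstruction error."""
    def __init__(self, n_in, n_code, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0, 0.1, (n_in, n_code))
        self.b1 = np.zeros(n_code)
        self.W2 = rng.normal(0, 0.1, (n_code, n_in))
        self.b2 = np.zeros(n_in)

    def encode(self, X):
        return np.tanh(X @ self.W1 + self.b1)

    def forward(self, X):
        H = self.encode(X)
        return H, H @ self.W2 + self.b2

    def fit(self, X, epochs=500, lr=0.05):
        for _ in range(epochs):
            H, Xhat = self.forward(X)
            err = Xhat - X                        # gradient of the MSE loss
            gW2 = H.T @ err / len(X)
            gb2 = err.mean(0)
            dH = err @ self.W2.T * (1 - H ** 2)   # tanh derivative
            gW1 = X.T @ dH / len(X)
            gb1 = dH.mean(0)
            self.W1 -= lr * gW1; self.b1 -= lr * gb1
            self.W2 -= lr * gW2; self.b2 -= lr * gb2
        return self

    def reconstruction_error(self, X):
        return float(((self.forward(X)[1] - X) ** 2).mean())
```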

APA, Harvard, Vancouver, ISO, and other styles

18

Jaspe-Villanueva, Alberto, Moonisa Ahsan, Ruggero Pintus, Andrea Giachetti, Fabio Marton, and Enrico Gobbetti. "Web-based Exploration of Annotated Multi-Layered Relightable Image Models." Journal on Computing and Cultural Heritage 14, no. 2 (June 2021): 1–29. http://dx.doi.org/10.1145/3430846.

Full text

Abstract:

We introduce a novel approach for exploring image-based shape and material models registered with structured descriptive information fused in multi-scale overlays. We represent the objects of interest as a series of registered layers of image-based shape and material data. These layers are represented at different scales and can come out of a variety of pipelines; they can include both Reflectance Transformation Imaging representations and spatially varying normal and Bidirectional Reflectance Distribution Function fields, possibly resulting from the fusion of multi-spectral data. An overlay image pyramid associates visual annotations with the various scales. The overlay pyramid of each layer is created at data preparation time in one of three ways: (1) imported from other pipelines, (2) created with the simple annotation drawing toolkit available within the viewer, or (3) produced with external image editing tools. This makes it easy for the user to seamlessly draw annotations over a region of interest. At runtime, clients can access an annotated multi-layered dataset through a standard web server. Users can explore these datasets on a variety of devices, ranging from small mobile devices to the large-scale displays used in museum installations. On all these platforms, JavaScript/WebGL2 clients running in browsers are fully capable of performing layer selection, interactive relighting, enhanced visualization, and annotation display. We address the problem of clutter by embedding interactive lenses: this focus-and-context-aware multiple-layer exploration tool supports exploration of more than one representation in a single view, allowing mixing and matching of presentation modes and annotation display. The capabilities of our approach are demonstrated on a variety of cultural heritage use cases involving different kinds of annotated surface and material models.
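
An overlay pyramid of the kind described (one overlay image per scale, so a client fetches annotations at the resolution it displays) can be built by repeated 2x2 box-filter downsampling; a minimal grayscale sketch, not the authors' preparation pipeline.

```python
import numpy as np

def build_overlay_pyramid(overlay):
    """Build a multi-scale pyramid from a full-resolution 2D overlay image:
    level 0 is the input; each subsequent level halves both dimensions by
    averaging 2x2 blocks, until a single-pixel level is reached."""
    levels = [np.asarray(overlay, dtype=float)]
    while min(levels[-1].shape) >= 2:
        a = levels[-1]
        # crop to even dimensions so the 2x2 blocks tile exactly
        h, w = (a.shape[0] // 2) * 2, (a.shape[1] // 2) * 2
        a = a[:h, :w]
        levels.append((a[0::2, 0::2] + a[1::2, 0::2]
                       + a[0::2, 1::2] + a[1::2, 1::2]) / 4.0)
    return levels
```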

APA, Harvard, Vancouver, ISO, and other styles

19

Kumar, Puneet, and J. Srinivas. "Free vibration, bending and buckling of a FG-CNT reinforced composite beam." Multidiscipline Modeling in Materials and Structures 13, no. 4 (November 13, 2017): 590–611. http://dx.doi.org/10.1108/mmms-05-2017-0032.

Full text

Abstract:

Purpose The purpose of this paper is to perform a numerical analysis of the static and dynamic behaviors of beams made of functionally graded carbon nanotube (FG-CNT) reinforced polymer and of hybrid laminated composite containing layers of carbon-reinforced polymer with CNTs. Conventional fibers have a higher density than carbon nanotubes (CNTs), so inserting an FG-CNT reinforced polymer layer into fiber-reinforced composite (FRC) structures makes them sustainable candidates for weight-critical applications. Design/methodology/approach In this context, the stress and strain formulations of a multi-layer composite system are determined with the help of the Timoshenko hypothesis, and the principle of virtual work is then employed to derive the governing equations of motion. Herein, the extended rule of mixture and conventional micromechanics relations are used to evaluate the material properties of the carbon nanotube reinforced composite (CNTRC) layer and the FRC layer, respectively. A generalized eigenvalue problem is formulated using a finite element approach and is solved for a single-layer FG-CNTRC beam and a multi-layer laminated hybrid composite beam by a user-interactive MATLAB code. Findings First, the natural frequencies of the FG-CNTRC beam are computed and compared with previously available results as well as with Ritz approximation outcomes. Further, free vibration, bending, and buckling analyses are carried out for the FG-CNTRC beam to interpret the effect of different CNT volume fractions, numbers of walls in the nanotube, distribution profiles, boundary conditions, and beam slenderness ratios. Originality/value A free vibration analysis of a hybrid laminated composite beam with two different layer stacking sequences is performed to present the advantages of the hybrid laminated beam over the conventional FRC beam.
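
Two of the modelling ingredients can be illustrated side by side: the extended rule of mixture for the effective axial modulus of a CNTRC layer (with an assumed efficiency parameter), and, in place of the paper's Timoshenko finite-element eigenproblem, the closed-form Euler-Bernoulli frequencies of a simply supported beam.

```python
import numpy as np

def axial_modulus_rom(v_cnt, e_cnt, e_matrix, eta1=1.0):
    """Extended rule of mixture for the axial modulus of a CNTRC layer:
    E11 = eta1 * Vcnt * E_cnt + Vm * Em, where eta1 is a CNT efficiency
    parameter (set to 1.0 here purely for illustration)."""
    return eta1 * v_cnt * e_cnt + (1.0 - v_cnt) * e_matrix

def natural_frequencies(e_mod, inertia, rho, area, length, n_modes=3):
    """Closed-form Euler-Bernoulli natural frequencies (rad/s) of a simply
    supported beam: omega_n = (n*pi/L)^2 * sqrt(E*I / (rho*A)). The paper
    instead solves the Timoshenko FEM generalized eigenproblem
    K*phi = omega^2 * M*phi; this is the simplest comparable model."""
    n = np.arange(1, n_modes + 1)
    return (n * np.pi / length) ** 2 * np.sqrt(e_mod * inertia / (rho * area))
```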

APA, Harvard, Vancouver, ISO, and other styles

20

Savitskaya, Tatiana E. "Creating New Information and Communication Models in Working with Users within the Google Library Project." Observatory of Culture 17, no. 3 (August 6, 2020): 251–61. http://dx.doi.org/10.25281/2072-3156-2020-17-3-251-261.

Full text

Abstract:

The article analyzes the characteristic features and development logic of Google's library project, which started as the famous Google Book Search project (also known as Google Books and Google Print) and later became Google Play Books within the multi-platform multimedia service Google Play. For decades, a constant trend of the corporation's activity has been the development and testing of new social and communication models in working with users, combining complexity and global reach of the audience with the use of its interactive potential. In the early 2000s, the company initiated mass scanning of library collections, thus starting the development of a new institutional paradigm for electronic libraries. Later, having developed new business models for the distribution of electronic copies of printed products in the course of numerous legal proceedings on charges of copyright infringement, it also pioneered the development of a new information market. From the very beginning, Google Book Search was aimed at the mass user, which was facilitated by its constantly expanding set of options and increasing comfort of access to the resource. In 2006–2010, the service gave users the opportunity to download books free from copyright restrictions in pdf format; a new viewing interface, "About this book", was added; the ability to work with texts using the "My library" option was provided; a mobile version of the service was launched; and access to statistical information on the diachronic frequency dynamics of word usage, based on the collected database, was provided. The article analyzes the further development of the library project within the Google Play Books service, which allows users to read, buy, and sell e-books, use bookmarks, upload their own books in pdf and EPUB formats, and synchronize data across all of a user's devices. The social significance of the project is assessed in the context of the development of a global electronic civilization.

APA, Harvard, Vancouver, ISO, and other styles

21

Barrufet, L., C. Pearson, S. Serjeant, K. Małek, I. Baronchelli, M. C. Campos-Varillas, G. J. White, et al. "A high redshift population of galaxies at the North Ecliptic Pole." Astronomy & Astrophysics 641 (September 2020): A129. http://dx.doi.org/10.1051/0004-6361/202037838.

Full text

Abstract:

Context. Dusty high-z galaxies are extreme objects with high star formation rates (SFRs) and luminosities. Characterising the properties of this population and analysing their evolution over cosmic time is key to understanding galaxy evolution in the early Universe. Aims. We select a sample of high-z dusty star-forming galaxies (DSFGs) and evaluate their position on the main sequence (MS) of star-forming galaxies, the well-known correlation between stellar mass and SFR. We aim to understand the causes of their high star formation and to quantify the percentage of DSFGs that lie above the MS. Methods. We adopted a multi-wavelength approach, with data from optical to submillimetre wavelengths from surveys at the North Ecliptic Pole, to study a submillimetre sample of high-redshift galaxies. Two submillimetre selection methods were used: sources selected at 850 μm with the Submillimetre Common-User Bolometer Array 2 (SCUBA-2) instrument, and Herschel Spectral and Photometric Imaging Receiver (SPIRE) selected sources (colour-colour diagrams and 500 μm risers); 185 sources were found to have good multi-wavelength coverage. The resulting sample of 185 high-z candidates was further studied by spectral energy distribution fitting with the CIGALE fitting code. We derived photometric redshifts, stellar masses, SFRs, and additional physical parameters, such as the infrared luminosity and the active galactic nuclei (AGN) contribution. Results. We find that the Herschel-SPIRE selected DSFGs generally have higher redshifts (z = 2.57 (+0.08, −0.09)) than sources selected solely by the SCUBA-2 method (z = 1.45 (+0.21, −0.06)). We find moderate SFRs (797 (+108, −50) M⊙ yr⁻¹), which are typically lower than those found in other studies. We find that the differing results in the literature are only in part due to selection effects, as even in the most extreme cases, SFRs are still lower than a few thousand solar masses per year. The difference in measured SFRs affects the position of DSFGs on the MS of galaxies; most of the DSFGs (60%) lie on the MS. Finally, we find that the star formation efficiency (SFE) depends on the epoch and intensity of the star formation burst in the galaxy: the later the burst, the more intense the star formation. We discuss whether the higher SFEs in DSFGs could be due to mergers.
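
The main-sequence classification used in such studies can be illustrated with a toy offset computation; the linear MS parametrization and the 0.6 dex starburst threshold below are generic assumptions, not the calibration used in the paper.

```python
import math

def ms_offset_dex(log_mstar, sfr, slope=0.8, intercept=-7.5):
    """Offset from the star-forming main sequence in dex:
    Delta_MS = log10(SFR) - log10(SFR_MS), using a hypothetical linear MS
    log10(SFR_MS) = slope * log10(M*) + intercept (real studies use
    redshift-dependent calibrations)."""
    return math.log10(sfr) - (slope * log_mstar + intercept)

def is_starburst(log_mstar, sfr, threshold_dex=0.6):
    """A common convention: galaxies more than ~0.6 dex above the MS are
    classed as starbursts; the rest lie on (or below) the MS."""
    return ms_offset_dex(log_mstar, sfr) > threshold_dex
```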

APA, Harvard, Vancouver, ISO, and other styles

22

Graham, E. D. "DRIVERS FOR THE DEVELOPMENT OF AN INTEGRATED SUPPLY BASE AND OFFSHORE SUPPLY CHAIN, NORTH WEST SHELF, AUSTRALIA." APPEA Journal 44, no. 1 (2004): 593. http://dx.doi.org/10.1071/aj03027.

Full text

Abstract:

Since the commencement of the major developments on the North West Shelf, the offshore resource industry, during both its construction and operational phases, has faced considerable logistical impediments to cost-effective solutions for the offshore supply chain. These impediments have included distance, scant resources, lack of infrastructure both on and offshore, and lack of critical mass. Throughout the world, offshore projects have greatly benefitted from the availability of integrated services to cater for the transport of equipment from the point of manufacture or distribution to the offshore location. Within the Australian context, the privately controlled Esso Barry Beach and Dampier Woodside facilities are examples of integrated services, but both differ considerably from a public multi-user facility. The model used in the Timor Sea, of one vessel or vessels for the use of several operators, is another example. The North West Shelf has now reached critical mass, and it became apparent several years ago that the area needed an integrated supply base available to multiple operators. It would need to include a heavy loadout wharf, laydown areas, slipway and engineering facilities, and office space to service forthcoming projects, as well as planning and cooperation amongst all players to maximise efficiency and the use of scant resources as drivers for economic benefits to offshore operators in the region. Furthermore, the fallout from the events of 11 September 2001 and continuing threats of terrorism have meant that the security of marine assets has become an important part of each operator's everyday life. The introduction of new legislation relating to this security issue is planned for mid 2004. In 2000 and 2001, Mermaid Marine Australia Limited undertook a major expansion of its Dampier supply base and established a world-class facility to meet the growing demands of the region. This complex has for the first time provided the northwest of Australia, particularly the North West Shelf, the Carnarvon Basin, and the onshore developments on the Burrup Peninsula, with a facility for the offloading and loadout of heavy shipments and with fabrication and slipway facilities, coupled with the advantages of a large supply base. This facility can also be expanded to meet growth and the emerging requirements related to security. This paper describes the drivers for change, commencing with the earliest supply chains and following through to the integrated service now available. These drivers meet the requirements of the offshore operators in the region, and the paper shows the benefits anticipated from this integrated service. It also outlines in detail the requirements of the International Maritime Organisation for worldwide changes to port and offshore security.

APA, Harvard, Vancouver, ISO, and other styles

23

KKA, Abdullah, Robert ABC, and Adeyemo AB. "Semantic Indexing Techniques on Information Retrieval of Web Content." IJARCCE 5, no. 8 (August 30, 2016): 347–52. http://dx.doi.org/10.17148/ijarcce.2016.5869.

Full text

APA, Harvard, Vancouver, ISO, and other styles

24

"Socially Aware Device-to-multi-device User Grouping for Popular Content Distribution." KSII Transactions on Internet and Information Systems 14, no. 11 (November 30, 2020). http://dx.doi.org/10.3837/tiis.2020.11.007.

Full text

APA, Harvard, Vancouver, ISO, and other styles

25

Arvanitaki, Antonia, Nikolaos Pappas, Niklas Carlsson, Parthajit Mohapatra, and Oleg Burdakov. "Performance analysis of congestion-aware secure broadcast channels." EURASIP Journal on Wireless Communications and Networking 2021, no. 1 (September 25, 2021). http://dx.doi.org/10.1186/s13638-021-02046-7.

Full text

Abstract:

Congestion-aware scheduling for downlink cellular communication has ignored the distribution of diverse content to different clients with heterogeneous secrecy requirements. Other application areas that encounter this issue are secure offloading in mobile-edge computing and vehicular communication. In this paper, we extend the work in Arvanitaki et al. (SN Comput Sci 1(1):53, 2019) by taking congestion and random access into consideration. Specifically, we study a two-user congestion-aware broadcast channel with heterogeneous traffic and different security requirements. We consider two randomized policies for selecting which packets to transmit: one is congestion-aware, taking the queue size into consideration, whereas the other is congestion-agnostic. We analyse the throughput and delay performance under two decoding schemes at the receivers, and provide insights into their relative secrecy performance and into how congestion control at the queue holding confidential information can help decrease the average delay per packet. We show that the congestion-aware policy provides better delay, throughput, and secrecy performance for large arrival packet probabilities at the queue holding the confidential information. The derived results also account for the self-interference caused at the receiver for whom the confidential data is intended, due to its full-duplex operation while jamming the communication at the other user. Finally, for the two decoding schemes, we formulate our problems in terms of multi-objective optimization, which allows for finding a trade-off between the average packet delay for packets intended for the legitimate user and the throughput for the other user under the congestion-aware policy.
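
The congestion-aware versus congestion-agnostic distinction can be illustrated with a toy slotted simulation in which the probability of serving a queue depends (or does not depend) on its backlog; this sketch is not the paper's exact randomized policies or analysis.

```python
import random

def average_delay_slots(arrival_p1, arrival_p2, congestion_aware,
                        slots=20000, seed=7):
    """Slotted two-user broadcast: at most one packet is served per slot.
    The congestion-aware policy picks queue 1 with probability proportional
    to its backlog; the agnostic policy flips a fair coin. Returns the
    average delay (in slots) of queue-1 packets via Little's law."""
    rng = random.Random(seed)
    q1 = q2 = 0
    backlog_sum = 0
    for _ in range(slots):
        q1 += rng.random() < arrival_p1       # Bernoulli arrivals
        q2 += rng.random() < arrival_p2
        if q1 + q2 > 0:
            p1 = q1 / (q1 + q2) if congestion_aware else 0.5
            if q1 and rng.random() < p1:
                q1 -= 1
            elif q2:
                q2 -= 1
            elif q1:                          # stay work-conserving
                q1 -= 1
        backlog_sum += q1
    return (backlog_sum / slots) / arrival_p1  # Little's law: T = N / lambda
```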

APA, Harvard, Vancouver, ISO, and other styles

26

Lee, Eva K., and Karan Uppal. "CERC: an interactive content extraction, recognition, and construction tool for clinical and biomedical text." BMC Medical Informatics and Decision Making 20, S14 (December 2020). http://dx.doi.org/10.1186/s12911-020-01330-8.

Full text

Abstract:

Background: Automated summarization of scientific literature and patient records is essential for enhancing clinical decision-making and facilitating precision medicine. Most existing summarization methods are based on single indicators of relevance, offer limited capabilities for information visualization, and do not account for user-specific interests. In this work, we develop an interactive content extraction, recognition, and construction system (CERC) that combines machine learning and visualization techniques with domain knowledge for highlighting and extracting salient information from clinical and biomedical text.

Methods: A novel sentence-ranking framework, multi-indicator text summarization (MINTS), is developed for extractive summarization. MINTS uses random forests and multiple indicators of importance for relevance evaluation and ranking of sentences. Indicative summarization is performed using weighted term frequency-inverse document frequency scores of over-represented domain-specific terms. A controlled-vocabulary dictionary generated using MeSH, SNOMED-CT, and PubTator is used for determining relevant terms. Thirty-five full-text CRAFT articles were used as the training set. The performance of the MINTS algorithm is evaluated on a test set consisting of the remaining 32 full-text CRAFT articles and 30 clinical case reports using the ROUGE toolkit.

Results: The random forests model classified sentences as “good” or “bad” with 87.5% accuracy on the test set. Summarization results from the MINTS algorithm achieved higher ROUGE-1, ROUGE-2, and ROUGE-SU4 scores when compared to methods based on single indicators such as term frequency distribution, position, eigenvector centrality (LexRank), and random selection (p < 0.01). The automatic language translator and the customizable information extraction and pre-processing pipeline for EHRs demonstrate that CERC can readily be incorporated within clinical decision support systems to improve quality of care and assist in data-driven, evidence-based decision making for direct patient care.

Conclusions: We have developed a web-based summarization and visualization tool, CERC (https://newton.isye.gatech.edu/CERC1/), for extracting salient information from clinical and biomedical text. The system ranks sentences by relevance and includes features that can facilitate early detection of medical risks in a clinical setting. The interactive interface allows users to filter content and edit/save summaries. The evaluation results on two test corpora show that the newly developed MINTS algorithm outperforms methods based on single characteristics of importance.
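The weighted TF-IDF scoring of sentences that the abstract describes for indicative summarization can be sketched roughly as follows. The whitespace tokenizer, the up-weighting factor for domain terms, and the toy sentences are illustrative assumptions, not the MINTS implementation:

```python
import math
from collections import Counter

def tfidf_sentence_scores(sentences, domain_terms=None, term_weight=2.0):
    """Score sentences by summed TF-IDF of their words, up-weighting
    domain-specific terms (analogous to MINTS's controlled vocabulary)."""
    tokenized = [s.lower().split() for s in sentences]
    n = len(tokenized)
    # document frequency: in how many sentences each word appears
    df = Counter(w for toks in tokenized for w in set(toks))
    domain_terms = domain_terms or set()
    scores = []
    for toks in tokenized:
        tf = Counter(toks)
        score = 0.0
        for w, f in tf.items():
            idf = math.log(n / df[w])
            weight = term_weight if w in domain_terms else 1.0
            score += weight * f * idf
        scores.append(score / max(len(toks), 1))  # length-normalize
    return scores

sentences = [
    "the patient presented with acute renal failure",
    "the weather was fine that day",
    "renal function declined despite treatment",
]
scores = tfidf_sentence_scores(sentences, domain_terms={"renal", "acute"})
best = sentences[scores.index(max(scores))]
```

An extractive summarizer would then emit the top-ranked sentences in document order; MINTS additionally combines this indicator with others (position, centrality) via a random forest.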

APA, Harvard, Vancouver, ISO, and other styles

27

Jiang, Liang, Lu Liu, Jingjing Yao, and Leilei Shi. "A hybrid recommendation model in social media based on deep emotion analysis and multi-source view fusion." Journal of Cloud Computing 9, no. 1 (October 7, 2020). http://dx.doi.org/10.1186/s13677-020-00199-2.

Full text

Abstract:

The recommendation system is an effective means to solve the information overload problem that exists in social networks, and it is also one of the most common applications of big data technology. Thus, the matrix decomposition recommendation model based on scoring data has been extensively studied and applied in recent years, but the data sparsity problem affects the recommendation quality of the model. To this end, this paper proposes a hybrid recommendation model based on deep emotion analysis and multi-source view fusion, which makes personalized recommendations with user-post interaction ratings, implicit feedback and auxiliary information in a hybrid recommendation system. Specifically, the HITS algorithm is first used to process the data set, which can filter out the users and posts with high influence and eliminate most of the low-quality users and posts. Second, a method for measuring the similarity of candidate posts and a method for calculating the K nearest neighbors are designed, which solves the problem that the text description information of post content in the recommendation system is difficult to mine and utilize. Then, a cooperative training strategy is used to achieve the fusion of the two recommendation views, which eliminates the data distribution deviation added to the training data pool in the iterative training. Finally, the performance of the DMHR algorithm proposed in this paper is compared with other state-of-the-art algorithms on the Twitter dataset. The experimental results show that the DMHR algorithm has significant improvements in score prediction and recommendation performance.
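The post-similarity and K-nearest-neighbor computation mentioned in the abstract can be illustrated with a minimal cosine-similarity sketch. The bag-of-words representation and the toy posts below are assumptions for illustration only, not the DMHR model:

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def k_nearest_posts(query, posts, k=2):
    """Return the k posts most similar to the query text."""
    qv = Counter(query.lower().split())
    sims = [(cosine(qv, Counter(p.lower().split()), ), p) for p in posts]
    sims.sort(key=lambda s: -s[0])
    return sims[:k]

posts = [
    "deep learning for emotion analysis",
    "football scores from last night",
    "emotion analysis of tweets",
]
top = k_nearest_posts("emotion analysis in social media", posts)
```

A real system would replace raw word counts with learned embeddings or TF-IDF vectors, but the neighbor-selection step has the same shape.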

APA, Harvard, Vancouver, ISO, and other styles

28

Saffariha, Maryam, Ali Jahani, and Daniel Potter. "Seed germination prediction of Salvia limbata under ecological stresses in protected areas: an artificial intelligence modeling approach." BMC Ecology 20, no. 1 (August 29, 2020). http://dx.doi.org/10.1186/s12898-020-00316-4.

Full text

Abstract:

Background: Salvia is a large, diverse, and polymorphous genus of the family Lamiaceae, comprising about 900 ornamental and medicinal species with an almost cosmopolitan distribution in the world. The success of Salvia limbata seed germination depends on numerous ecological factors and stresses. We aimed to analyze Salvia limbata seed germination under four ecological stresses, salinity, drought, temperature and pH, with application of artificial intelligence modeling techniques such as MLR (multiple linear regression) and MLP (multi-layer perceptron). S. limbata seed germination was tested in different combinations of abiotic conditions: five temperatures of 10, 15, 20, 25 and 30 °C; seven drought treatments of 0, −2, −4, −6, −8, −10 and −12 bars; eight salinity treatments containing 0, 50, 100, 150, 200, 250, 300 and 350 mM of NaCl; and six pH treatments of 4, 5, 6, 7, 8 and 9. In total, 228 combinations were tested to determine the percentage of germination for model development.

Results: Compared to MLR, the MLP model achieves significant values of R2 on the training (0.95), validation (0.92) and test (0.93) data sets. According to the results of sensitivity analysis, drought, salinity, pH and temperature are, in that order, the most significant variables influencing S. limbata seed germination. Areas with high moisture content and low salinity in the soil have a high potential for seed germination of S. limbata. A temperature of 18.3 °C and a pH of 7.7 are proposed for achieving the maximum number of germinated S. limbata seeds.

Conclusions: The multi-layer perceptron model helps managers to determine the success of S. limbata seed planting in agricultural or natural ecosystems. The designed graphical user interface is an environmental decision support system tool for agriculture or rangeland managers to predict the success of S. limbata seed germination (percentage) under different ecological constraints.

APA, Harvard, Vancouver, ISO, and other styles

29

Bao, Lin, Xiaoyan Sun, Dunwei Gong, and Yong Zhang. "Multi-source Heterogeneous User Generated Contents-driven Interactive Estimation of Distribution Algorithms for Personalized Search." IEEE Transactions on Evolutionary Computation, 2021, 1. http://dx.doi.org/10.1109/tevc.2021.3109576.

Full text

APA, Harvard, Vancouver, ISO, and other styles

30

Xiem, Hoang Van, Duong Thi Hang, Trinh Anh Vu, and Vu Xuan Thang. "Cooperative Caching in Two-Layer Hierarchical Cache-aided Systems." VNU Journal of Science: Computer Science and Communication Engineering 35, no. 1 (May 16, 2019). http://dx.doi.org/10.25073/2588-1086/vnucsce.222.

Full text

Abstract:

Caching has received much attention as a promising technique to overcome high data rate and stringent latency requirements in future wireless networks. The premise of caching is to prefetch the most popular contents closer to end users in the local cache of edge nodes, e.g., the base station (BS). When a user requests a content that is available in the cache, it can be served directly without being sent from the core network. In this paper, we investigate the performance of hierarchical caching systems, in which both the BS and end users are equipped with a storage memory. In particular, we propose a novel cooperative caching scheme that jointly optimizes the content placement at the BS’s and users’ caches. The proposed caching scheme is analytically shown to achieve a larger global caching gain than the reference under both uncoded and coded caching strategies. Finally, numerical results are presented to demonstrate the effectiveness of our proposed caching algorithm. Keywords: hierarchical caching system, cooperative caching, caching gain, uncoded caching, coded caching.
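The "global caching gain" the abstract refers to can be made concrete with the classic single-layer coded caching rate of Maddah-Ali and Niesen (N files, K users, per-user cache of M files). This is the standard textbook formula that this literature builds on, not the paper's two-layer scheme:

```python
def uncoded_rate(K, M, N):
    """Conventional (uncoded) delivery rate: only the local caching gain;
    each of the K users still needs a (1 - M/N) fraction of its file."""
    return K * (1 - M / N)

def coded_rate(K, M, N):
    """Maddah-Ali--Niesen centralized coded caching rate: coded multicasting
    adds the global gain factor 1 / (1 + K*M/N)."""
    return K * (1 - M / N) / (1 + K * M / N)

# Example: N = 20 files, K = 10 users, sweeping the per-user cache size M
K, N = 10, 20
rates = {M: (uncoded_rate(K, M, N), coded_rate(K, M, N)) for M in (0, 5, 10)}
```

With no cache (M = 0) the two rates coincide; as M grows, the coded scheme's multicast opportunities shrink the delivery load well below the uncoded baseline.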

APA, Harvard, Vancouver, ISO, and other styles

31

Schedl, Markus, Christine Bauer, Wolfgang Reisinger, Dominik Kowald, and Elisabeth Lex. "Listener Modeling and Context-Aware Music Recommendation Based on Country Archetypes." Frontiers in Artificial Intelligence 3 (February 2, 2021). http://dx.doi.org/10.3389/frai.2020.508725.

Full text

Abstract:

Music preferences are strongly shaped by the cultural and socio-economic background of the listener, which is reflected, to a considerable extent, in country-specific music listening profiles. Previous work has already identified several country-specific differences in the popularity distribution of music artists listened to. In particular, what constitutes the “music mainstream” strongly varies between countries. To complement and extend these results, the article at hand delivers the following major contributions: First, using state-of-the-art unsupervised learning techniques, we identify and thoroughly investigate (1) country profiles of music preferences on the fine-grained level of music tracks (in contrast to earlier work that relied on music preferences on the artist level) and (2) country archetypes that subsume countries sharing similar patterns of listening preferences. Second, we formulate four user models that leverage the user’s country information on music preferences. Among others, we propose a user modeling approach to describe a music listener as a vector of similarities over the identified country clusters or archetypes. Third, we propose a context-aware music recommendation system that leverages implicit user feedback, where context is defined via the four user models. More precisely, it is a multi-layer generative model based on a variational autoencoder, in which contextual features can influence recommendations through a gating mechanism. Fourth, we thoroughly evaluate the proposed recommendation system and user models on a real-world corpus of more than one billion listening records of users around the world (out of which we use 369 million in our experiments) and show its merits vis-à-vis state-of-the-art algorithms that do not exploit this type of context information.
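The user-modeling idea of describing a listener as a vector of similarities over country archetypes can be sketched as follows. The centroid vectors and listening counts are invented toy data; the real system operates on track-level preference profiles learned from listening records:

```python
import math

def similarity_vector(listener, centroids):
    """Represent a listener by cosine similarities to each country-archetype
    centroid (a toy version of the user model the abstract describes)."""
    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return dot / (nu * nv) if nu and nv else 0.0
    return [cos(listener, c) for c in centroids]

# Toy track-preference vectors (listening counts over 4 tracks)
archetypes = [[10, 0, 2, 1], [0, 8, 1, 6], [3, 3, 3, 3]]
user = [9, 1, 2, 0]
ctx = similarity_vector(user, archetypes)
```

The resulting vector `ctx` would then be fed as the context signal into the gated variational-autoencoder recommender.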

APA, Harvard, Vancouver, ISO, and other styles

32

Andringa, Peter, David Duquette, Deborah Dwyer, Philip Napoli, and Petra Ronald. "HOW IS SOCIAL MEDIA GATEKEEPING DIFFERENT? A MULTI-PLATFORM COMPARATIVE ANALYSIS OF THE NEW YORK TIMES." AoIR Selected Papers of Internet Research, February 2, 2020. http://dx.doi.org/10.5210/spir.v2018i0.10472.

Full text

Abstract:

News audiences are increasingly fragmented across different media platforms. Consequently, individual news organizations simultaneously disseminate their content across different media. Each of these media has different user bases, interface characteristics, and distribution systems. Given these substantial differences, the dynamics of the gatekeeping process – and the news values that guide this process – vary across different media technologies/platforms. As audience attention migrates from older to newer platforms (such as social media), it is increasingly important that we understand how the nature of the news that is disseminated – and thus consumed – may be different from the news disseminated through more traditional means. The ramifications of these differences can be profound if the news disseminated on the newer platforms is, for example, more or less substantive, more or less diverse, or more or less plentiful than the news disseminated on older technologies/platforms. This study addresses these issues through a comparative gatekeeping analysis of the New York Times. For this study, a month’s worth of New York Times front page, home page, and Facebook page story output are comparatively analyzed across dimensions such as story quantity, story duplication, hard versus soft news, and content diversity. The primary goal is to determine if or how the nature of the news that is prioritized for news consumers differs between the social media context and older contexts such as the print front page and the web home page.

APA, Harvard, Vancouver, ISO, and other styles

33

Lapoule, Paul, and Enrico Bruno Colla. "The multi-channel impact on the sales forces management." International Journal of Retail & Distribution Management 44, no. 3 (March 14, 2016). http://dx.doi.org/10.1108/ijrdm-11-2014-0159.

Full text

Abstract:

Purpose: The primary objective of this qualitative research is to gain a deeper understanding of the multi-channel impact on the role of sales forces and the way in which they are managed in a B2B context.

Design/methodology/approach: The authors compare the conclusions of their literature review with an analysis of the sales strategy of a leading cosmetic brand. They conducted their study by applying a multi-method qualitative research approach, which includes semi-structured interviews with managers and action research performed by accompanying five salespeople on visits to their professional clients.

Findings: The results suggest that the development of a multi-channel context encourages salespeople to focus less on sales and order taking and more on advising clients about how best to develop their businesses. The multi-channel evolution seems to have enabled a transition of the salesperson's role from a sales function to that of a provider of personalized advice in the fields of business development, team management, salon promotions and merchandising.

Research limitations/implications: The quest for coherence, particularly at the international level, would justify broadening the study to include the impact of multi-channel sales on the market positioning of the brand and of other brands in different industrial sectors.

Practical implications: The expansion of the multi-channel sales approach implies that managers are obliged to seek a convergence, or at least a degree of coherence, between the different channels. This strategy can be used to promote an effective integration of channels at the international level into a single, reliable distribution system that avoids all forms of cannibalization. The omni-channel strategy implies shifting the emphasis in the channel and moving from a focus on direct sales to the professional client (“selling-in”) to a stress on direct sales to the end user (“selling-out”).

Originality/value: This article provides an original analytical approach to highlighting training methods and systems of remuneration that will help sales forces to manage the inter-channel migration of their customers. Salespeople will then be able to view the future omni-channel context as an opportunity to improve the status of their role.

APA, Harvard, Vancouver, ISO, and other styles

34

He, Jianjia, Gang Liu, Thi Hoai Thuong Mai, and Ting Ting Li. "Research on the Allocation of 3D Printing Emergency Supplies in Public Health Emergencies." Frontiers in Public Health 9 (March 26, 2021). http://dx.doi.org/10.3389/fpubh.2021.657276.

Full text

Abstract:

Significant public health emergencies greatly impact the global production supply chain and cause severe shortages of personal protective and medical emergency supplies. Thus, the rapid manufacturing, distributed production, high design freedom, and low entry threshold of 3D printing can play important roles in the production of emergency supplies. In order to better realize the efficient distribution of 3D-printed emergency supplies, this paper studies the relationship between supply and demand of 3D printing equipment and emergency supplies produced by 3D printing technology after public health emergencies. First, we fully consider the heterogeneity of user orders, 3D printing equipment resources, and the characteristics of diverse production objectives in the context of the emergent public health environment. The multi-objective optimization model for the production of 3D printing emergency supplies, evaluated across multiple manufacturers and multiple disaster sites, can maximize the time and cost benefits of the 3D printing of emergency supplies. Then, an improved non-dominated sorting genetic algorithm (NSGA-II) is developed to solve the multi-objective optimization model and is compared with the traditional NSGA-II algorithm; the improved algorithm yields more than one solution in the Pareto-optimal solution set. Finally, the effectiveness of 3D printing is verified by numerical simulation, and it is found that it can solve the matching problem of supply and demand of 3D printing emergency supplies in public health emergencies.
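The non-dominated sorting at the heart of NSGA-II can be illustrated with a minimal Pareto-front extraction over (time, cost) pairs, the two objectives the abstract optimizes. The candidate plans below are illustrative values, not results from the paper:

```python
def dominates(a, b):
    """a dominates b (minimization): no worse in every objective,
    strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """First non-dominated front -- the rank-1 set NSGA-II sorts solutions into."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# (time, cost) of candidate allocation plans -- illustrative values only
plans = [(4, 9), (5, 5), (7, 3), (6, 6), (9, 2)]
front = pareto_front(plans)
```

Here (6, 6) is dominated by (5, 5) and drops out; the surviving plans are the trade-off curve a decision maker chooses from. Full NSGA-II repeats this sorting over successive fronts and adds crowding-distance selection.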

APA, Harvard, Vancouver, ISO, and other styles

35

Cesarini, Paul. "‘Opening’ the Xbox." M/C Journal 7, no. 3 (July 1, 2004). http://dx.doi.org/10.5204/mcj.2371.

Full text

Abstract:

“As the old technologies become automatic and invisible, we find ourselves more concerned with fighting or embracing what’s new”—Dennis Baron, From Pencils to Pixels: The Stage of Literacy Technologies

What constitutes a computer, as we have come to expect it? Are they necessarily monolithic “beige boxes”, connected to computer monitors, sitting on computer desks, located in computer rooms or computer labs? In order for a device to be considered a true computer, does it need to have a keyboard and mouse? If this were 1991 or earlier, our collective perception of what computers are and are not would largely be framed by this “beige box” model: computers are stationary, slab-like, and heavy, and their natural habitats must be in rooms specifically designated for that purpose. In 1992, when Apple introduced the first PowerBook, our perception began to change. Certainly there had been other portable computers prior to that, such as the Osborne 1, but these were more luggable than portable, weighing just slightly less than a typical sewing machine. The PowerBook and subsequent waves of laptops, personal digital assistants (PDAs), and so-called smart phones from numerous other companies have steadily forced us to rethink and redefine what a computer is and is not, how we interact with them, and the manner in which these tools might be used in the classroom. However, this reconceptualization of computers is far from over, and is in fact steadily evolving as new devices are introduced, adopted, and subsequently adapted for uses beyond their original purpose. Pat Crowe’s Book Reader project, for example, has morphed Nintendo’s GameBoy and GameBoy Advance into a viable electronic book platform, complete with images, sound, and multi-language support.
(Crowe, 2003) His goal was to take this existing technology previously framed only within the context of proprietary adolescent entertainment, and repurpose it for open, flexible uses typically associated with learning and literacy. Similar efforts are underway to repurpose Microsoft’s Xbox, perhaps the ultimate symbol of “closed” technology given Microsoft’s propensity for proprietary code, in order to make it a viable platform for Open Source Software (OSS). However, these efforts are not foregone conclusions, and are in fact typical of the ongoing battle over who controls the technology we own in our homes, and how open source solutions are often at odds with a largely proprietary world. In late 2001, Microsoft launched the Xbox with a multimillion dollar publicity drive featuring events, commercials, live models, and statements claiming this new console gaming platform would “change video games the way MTV changed music”. (Chan, 2001) The Xbox launched with the following technical specifications: 733MHz Pentium III; 64MB RAM; 8 or 10GB internal hard disk drive; CD/DVD-ROM drive (speed unknown); Nvidia graphics processor with HDTV support; 4 USB 1.1 ports (adapter required); AC3 audio; 10/100 ethernet port; optional 56k modem (TechTV, 2001). While current computers dwarf these specifications in virtually all areas now, for 2001 these were roughly on par with many desktop systems. The retail price at the time was $299, but steadily dropped to nearly half that with additional price cuts anticipated. Based on these features, the preponderance of “off the shelf” parts and components used, and the relatively reasonable price, numerous programmers quickly became interested in seeing if it was possible to run Linux and additional OSS on the Xbox. In each case, the goal has been similar: exceed the original purpose of the Xbox, to determine if and how well it might be used for basic computing tasks.
If these attempts prove to be successful, the Xbox could allow institutions to dramatically increase the student-to-computer ratio in select environments, or allow individuals who could not otherwise afford a computer to instead buy an Xbox, download and install Linux, and use this new device to write, create, and innovate. This drive to literally and metaphorically “open” the Xbox comes from many directions. Such efforts include Andrew Huang’s self-published “Hacking the Xbox” book in which, under the auspices of reverse engineering, Huang analyzes the architecture of the Xbox, detailing step-by-step instructions for flashing the ROM, upgrading the hard drive and/or RAM, and generally prepping the device for use as an information appliance. Additional initiatives include Lindows CEO Michael Robertson’s $200,000 prize to encourage Linux development on the Xbox, and the Xbox Linux Project at SourceForge. What is Linux? Linux is an alternative operating system initially developed in 1991 by Linus Benedict Torvalds. Linux was based off a derivative of the MINIX operating system, which in turn was a derivative of UNIX. (Hasan 2003) Linux is currently available for Intel-based systems that would normally run versions of Windows, PowerPC-based systems that would normally run Apple’s Mac OS, and a host of other handheld, cell phone, or so-called “embedded” systems. Linux distributions are based almost exclusively on open source software, graphic user interfaces, and middleware components. While there are commercial Linux distributions available, these mainly just package the freely available operating system with bundled technical support, manuals, some exclusive or proprietary commercial applications, and related services. Anyone can still download and install numerous Linux distributions at no cost, provided they do not need technical support beyond the community / enthusiast level.
Typical Linux distributions come with open source web browsers, word processors and related productivity applications (such as those found in OpenOffice.org), and related tools for accessing email, organizing schedules and contacts, etc. Certain Linux distributions are more or less designed for network administrators, system engineers, and similar “power users” somewhat distanced from our students. However, several distributions including Lycoris, Mandrake, LindowsOS, and others are specifically tailored as regular, desktop operating systems, with regular, everyday computer users in mind. As Linux has no draconian “product activation key” method of authentication, or digital rights management-laden features associated with installation and implementation on typical desktop and laptop systems, Linux is becoming an ideal choice both individually and institutionally. It still faces an uphill battle in terms of achieving widespread acceptance as a desktop operating system. As Finnie points out in Desktop Linux Edges Into The Mainstream: “to attract users, you need ease of installation, ease of device configuration, and intuitive, full-featured desktop user controls. It’s all coming, but slowly. With each new version, desktop Linux comes closer to entering the mainstream. It’s anyone’s guess as to when critical mass will be reached, but you can feel the inevitability: There’s pent-up demand for something different.” (Finnie 2003) Linux is already spreading rapidly in numerous capacities, in numerous countries. Linux has “taken hold wherever computer users desire freedom, and wherever there is demand for inexpensive software.” Reports from technology research company IDG indicate that roughly a third of computers in Central and South America run Linux.
Several countries, including Mexico, Brazil, and Argentina, have all but mandated that state-owned institutions adopt open source software whenever possible to “give their people the tools and education to compete with the rest of the world.” (Hills 2001) The Goal Less than a year after Microsoft introduced the Xbox, the Xbox Linux project formed. The Xbox Linux Project has a goal of developing and distributing Linux for the Xbox gaming console, “so that it can be used for many tasks that Microsoft don’t want you to be able to do. ...as a desktop computer, for email and browsing the web from your TV, as a (web) server” (Xbox Linux Project 2002). Since the Linux operating system is open source, meaning it can freely be tinkered with and distributed, those who opt to download and install Linux on their Xbox can do so with relatively little overhead in terms of cost or time. Additionally, Linux itself looks very “windows-like”, making for a fairly low learning curve. To help increase overall awareness of this project and assist in diffusing it, the Xbox Linux Project offers step-by-step installation instructions, with the end result being a system capable of using common peripherals such as a keyboard and mouse, scanner, printer, a “webcam and a DVD burner, connected to a VGA monitor; 100% compatible with a standard Linux PC, all PC (USB) hardware and PC software that works with Linux.” (Xbox Linux Project 2002) Such a system could have tremendous potential for technology literacy. Pairing an Xbox with Linux and OpenOffice.org, for example, would provide our students essentially the same capability any of them would expect from a regular desktop computer. They could send and receive email, communicate using instant messaging, IRC, or newsgroup clients, and browse Internet sites just as they normally would. In fact, the overall browsing experience for Linux users is substantially better than that for most Windows users.
Internet Explorer, the default browser on all systems running Windows-based operating systems, lacks basic features standard in virtually all competing browsers. Native blocking of “pop-up” advertisements is still not yet possible in Internet Explorer without the aid of a third-party utility. Tabbed browsing, which involves the ability to easily open and sort through multiple Web pages in the same window, often with a single mouse click, is also missing from Internet Explorer. The same can be said for a robust download manager, “find as you type”, and a variety of additional features. Mozilla, Netscape, Firefox, Konqueror, and essentially all other OSS browsers for Linux have these features. Of course, most of these browsers are also available for Windows, but Internet Explorer is still considered the standard browser for the platform. If the Xbox Linux Project becomes widely diffused, our students could edit and save Microsoft Word files in OpenOffice.org’s Writer program, and do the same with PowerPoint and Excel files in similar OpenOffice.org components. They could access instructor comments originally created in Microsoft Word documents, and in turn could add their own comments and send the documents back to their instructors. They could even perform many functions not yet possible in Microsoft Office, including saving files in PDF or Flash format without needing Adobe’s Acrobat product or Macromedia’s Flash Studio MX. Additionally, by way of this project, the Xbox can also serve as “a Linux server for HTTP/FTP/SMB/NFS, serving data such as MP3/MPEG4/DivX, or a router, or both; without a monitor or keyboard or mouse connected.” (Xbox Linux Project 2003) In a very real sense, our students could use these inexpensive systems previously framed only within the context of entertainment, for educational purposes typically associated with computer-mediated learning.
Problems: Control and Access The existing rhetoric of technological control surrounding current and emerging technologies appears to be stifling many of these efforts before they can even be brought to the public. This rhetoric of control is largely typified by overly-restrictive digital rights management (DRM) schemes antithetical to education, and the Digital Millennium Copyright Act (DMCA). Combined, both are currently being used as technical and legal clubs against these efforts. Microsoft, for example, has taken a dim view of any efforts to adapt the Xbox to Linux. Microsoft CEO Steve Ballmer, who has repeatedly referred to Linux as a cancer and has equated OSS with being un-American, stated, “Given the way the economic model works - and that is a subsidy followed, essentially, by fees for every piece of software sold - our license framework has to do that.” (Becker 2003) Since the Xbox is based on a subsidy model, meaning that Microsoft actually sells the hardware at a loss and instead generates revenue off software sales, Ballmer launched a series of concerted legal attacks against the Xbox Linux Project and similar efforts. In 2002, Nintendo, Sony, and Microsoft simultaneously sued Lik Sang, Inc., a Hong Kong-based company that produces programmable cartridges and “mod chips” for the PlayStation II, Xbox, and Game Cube. Nintendo states that its company alone loses over $650 million each year due to piracy of their console gaming titles, which typically originate in China, Paraguay, and Mexico. (GameIndustry.biz) Currently, many attempts to “mod” the Xbox required the use of such chips. As Lik Sang is one of the only suppliers, initial efforts to adapt the Xbox to Linux slowed considerably. Despite the fact that such chips can still be ordered and shipped here by less conventional means, it does not change the fact that the chips themselves would be illegal in the U.S.
due to the anticircumvention clause in the DMCA itself, which is designed specifically to protect any DRM-wrapped content, regardless of context. The Xbox Linux Project then attempted to get Microsoft to officially sanction their efforts. They were not only rebuffed, but Microsoft then opted to hire programmers specifically to create technological countermeasures for the Xbox, to defeat additional attempts at installing OSS on it. Undeterred, the Xbox Linux Project eventually arrived at a method of installing and booting Linux without the use of mod chips, and has since taken a more defiant tone with Microsoft regarding its circumvention efforts. (Lettice 2002) They state that “Microsoft does not want you to use the Xbox as a Linux computer, therefore it has some anti-Linux-protection built in, but it can be circumvented easily, so that an Xbox can be used as what it is: an IBM PC.” (Xbox Linux Project 2003)

Problems: Learning Curves and Usability

In spite of the difficulties imposed by the combined technological and legal attacks on this project, it has succeeded at infiltrating this closed system with OSS. It has done so beyond the mere prototype level, too, as evidenced by the Xbox Linux Project now having both complete, step-by-step instructions available for users to modify their own Xbox systems, and an alternate plan catering to those who have the interest in modifying their systems, but not the time or technical inclination. Specifically, this option involves users mailing their Xbox systems to community volunteers within the Xbox Linux Project, and having these volunteers perform the necessary software preparation or actually do the full Linux installation for them, free of charge (presumably not including shipping). This particular aspect of the project, dubbed “Users Help Users”, appears to be fairly new. 
Yet, it already lists over sixty volunteers capable and willing to perform this service, since “Many users don’t have the possibility, expertise or hardware” to perform these modifications. Amazingly enough, in some cases these volunteers are barely out of junior high school. One such volunteer stipulates that those seeking his assistance keep in mind that he is “just 14” and that when performing these modifications he “...will not always be finished by the next day”. (Steil 2003) In addition to this interesting if somewhat unusual level of community-driven support, there are currently several Linux-based options available for the Xbox. The two that are perhaps the most developed are GentooX, which is based off the popular Gentoo Linux distribution, and Ed’s Debian, based off the Debian GNU / Linux distribution. Both Gentoo and Debian are “seasoned” distributions that have been available for some time now, though Daniel Robbins, Chief Architect of Gentoo, refers to the product as actually being a “metadistribution” of Linux, due to its high degree of adaptability and configurability. (Gentoo 2004) Specifically, Robbins asserts that Gentoo is capable of being “customized for just about any application or need. ...an ideal secure server, development workstation, professional desktop, gaming system, embedded solution or something else—whatever you need it to be.” (Robbins 2004) He further states that the whole point of Gentoo is to provide a better, more usable Linux experience than that found in many other distributions. Robbins states that: “The goal of Gentoo is to design tools and systems that allow a user to do their work pleasantly and efficiently as possible, as they see fit. Our tools should be a joy to use, and should help the user to appreciate the richness of the Linux and free software community, and the flexibility of free software. ...Put another way, the Gentoo philosophy is to create better tools. 
When a tool is doing its job perfectly, you might not even be very aware of its presence, because it does not interfere and make its presence known, nor does it force you to interact with it when you don’t want it to. The tool serves the user rather than the user serving the tool.” (Robbins 2004) There is also a so-called “live CD” Linux distribution suitable for the Xbox, called dyne:bolic, and an in-progress release of Slackware Linux, as well. According to the Xbox Linux Project, the only difference between the standard releases of these distributions and their Xbox counterparts is that “...the install process – and naturally the bootloader, the kernel and the kernel modules – are all customized for the Xbox.” (Xbox Linux Project, 2003) Of course, even if Gentoo is as user-friendly as Robbins purports, even if the Linux kernel itself has become significantly more robust and efficient, and even if Microsoft again drops the retail price of the Xbox, is this really a feasible solution in the classroom? Does the Xbox Linux Project have an army of 14-year-olds willing to modify dozens, perhaps hundreds of these systems for use in secondary schools and higher education? Of course not. If such an institutional rollout were to be undertaken, it would require significant support from not only faculty, but Department Chairs, Deans, IT staff, and quite possibly Chief Information Officers. Disk images would need to be customized for each institution to reflect their respective needs, ranging from setting specific home pages on web browsers, to bookmarks, to custom back-up and / or disk re-imaging scripts, to network authentication. This would be no small task. Yet, the steps mentioned above are essentially no different from what would be required of any IT staff when creating a new disk image for a computer lab, be it one for a Windows-based system or a Mac OS X-based one. The primary difference would be Linux itself—nothing more, nothing less. 
The institutional difficulties in undertaking such an effort would likely be encountered prior to even purchasing a single Xbox, in that they would involve the same difficulties associated with any new hardware or software initiative: staffing, budget, and support. If the institution in question is either unwilling or unable to address these three factors, it would not matter if the Xbox itself were as free as Linux.

An Open Future, or a Closed One?

It is unclear how far the Xbox Linux Project will be allowed to go in their efforts to invade an essentially proprietary system with OSS. Unlike Sony, which has made deliberate steps to commercialize similar efforts for their PlayStation 2 console, Microsoft appears resolute in fighting OSS on the Xbox by any means necessary. They will continue to crack down on any companies selling so-called mod chips, and will continue to employ technological protections to keep the Xbox “closed”. Despite clear evidence to the contrary, in all likelihood Microsoft will continue to equate any OSS efforts directed at the Xbox with piracy-related motivations. Additionally, Microsoft’s successor to the Xbox would likely incorporate additional anticircumvention technologies that could set the Xbox Linux Project back by months or years, or could stop it cold. Of course, it is difficult to say with any degree of certainty how this “Xbox 2” (perhaps a more appropriate name might be “Nextbox”) will impact this project. Regardless of how this device evolves, there can be little doubt of the value of Linux, OpenOffice.org, and other OSS to teaching and learning with technology. This value exists not only in terms of price, but in increased freedom from policies and technologies of control. 
New Linux distributions from Gentoo, Mandrake, Lycoris, Lindows, and other companies are just now starting to focus their efforts on Linux as a user-friendly, easy-to-use desktop operating system, rather than just a server or “techno-geek” environment suitable for advanced programmers and computer operators. While metaphorically opening the Xbox may not be for everyone, and may not be a suitable computing solution for all, I believe we as educators must promote and encourage such efforts whenever possible. I suggest this because I believe we need to exercise our professional influence and ultimately shape the future of technology literacy, either individually as faculty or collectively as departments, colleges, or institutions. Moran and Fitzsimmons-Hunter argue this very point in Writing Teachers, Schools, Access, and Change. One of the fundamental provisions they use to define “access” asserts that there must be a willingness for teachers and students to “fight for the technologies that they need to pursue their goals for their own teaching and learning.” (Taylor / Ward 160) Regardless of whether or not this debate is grounded in the “beige boxes” of the past, or the Xboxes of the present, much is at stake. Private corporations should not be in a position to control the manner in which we use legally-purchased technologies, regardless of whether or not these technologies are then repurposed for literacy uses. I believe the exigency associated with this control, and the ongoing evolution of what is and is not a computer, dictates that we assert ourselves more actively into this discussion. We must take steps to provide our students with the best possible computer-mediated learning experience, however seemingly unorthodox the technological means might be, so that they may think critically, communicate effectively, and participate actively in society and in their future careers. 
About the Author

Paul Cesarini is an Assistant Professor in the Department of Visual Communication & Technology Education, Bowling Green State University, Ohio. Email: pcesari@bgnet.bgsu.edu

Works Cited

Baron, Denis. “From Pencils to Pixels: The Stages of Literacy Technologies.” Passions, Pedagogies and 21st Century Technologies. Eds. Gail E. Hawisher and Cynthia L. Selfe. Utah: Utah State University Press, 1999. 15–33.
Becker, David. “Ballmer: Mod Chips Threaten Xbox.” News.com. 21 Oct 2002. <http://news.com.com/2100-1040-962797.php>.
Finni, Scott. “Desktop Linux Edges Into The Mainstream.” TechWeb. 8 Apr 2003. <http://www.techweb.com/tech/software/20030408_software>.
<http://xbox-linux.sourceforge.net/docs/debian.php>.
<http://news.com.com/2100-1040-978957.html?tag=nl>.
<http://archive.infoworld.com/articles/hn/xml/02/08/13/020813hnchina.xml>.
<http://www.neoseeker.com/news/story/1062/>.
<http://www.bookreader.co.uk>.
<http://www.theregister.co.uk/content/archive/29439.html>.
<http://gentoox.shallax.com/>.
<http://ragib.hypermart.net/linux/>.
<http://www.itworld.com/Comp/2362/LWD010424latinlinux/pfindex.html>.
<http://www.xbox-linux.sourceforge.net>.
<http://www.theregister.co.uk/content/archive/27487.html>.
<http://www.theregister.co.uk/content/archive/26078.html>.
<http://www.us.playstation.com/peripherals.aspx?id=SCPH-97047>.
<http://www.techtv.com/extendedplay/reviews/story/0,24330,3356862,00.html>.
<http://www.wired.com/news/business/0,1367,61984,00.html>.
<http://www.gentoo.org/main/en/about.xml>.
<http://www.gentoo.org/main/en/philosophy.xml>.
<http://techupdate.zdnet.com/techupdate/stories/main/0,14179,2869075,00.html>.
<http://xbox-linux.sourceforge.net/docs/usershelpusers.html>.
<http://www.cnn.com/2002/TECH/fun.games/12/16/gamers.liksang/>.

Citation reference for this article

MLA Style: Cesarini, Paul. “‘Opening’ the Xbox.” M/C: A Journal of Media and Culture <http://www.media-culture.org.au/0406/08_Cesarini.php>.
APA Style: Cesarini, P. (2004, Jul. 1). “Opening” the Xbox. M/C: A Journal of Media and Culture, 7, <http://www.media-culture.org.au/0406/08_Cesarini.php>.

APA, Harvard, Vancouver, ISO, and other styles

36

Kredina, Anna. "Transformation of Fintech: Impact of POS and ATM on Non-Cash Payments." Eurasian Journal of Economic and Business Studies 2, no. 60 (May 12, 2021). http://dx.doi.org/10.47703/ejebs.v2i60.51.

Full text

Abstract:

The banking system of Kazakhstan has undergone many changes in recent years: traditional bank branches are no longer in the demand they were 20 years ago. Banks are now intentionally closing branches and transferring their clients to the online format, steps that were especially justified in the context of the COVID-19 pandemic. At the same time, the technology has developed: point-of-sale (POS) terminals and automated teller machines (ATMs) have evolved considerably since their introduction. To ensure the security of transactions, transfers are tied to an individual, and the transfers themselves use multi-factor identification. Payment cards linked to an account are still necessary, even if the user makes non-cash transfers through applications on a mobile phone. The purpose of this study is to identify whether a relationship exists between non-cash payments and proxies for non-cash banking in Kazakhstan. Two hypotheses (a null and an alternative) were put forward and tested, of which one was later confirmed. Monthly statistical data covering the period 2004-2020 were obtained from the electronic repository of the Statistical Bulletin of the National Bank of Kazakhstan. SPSS and Microsoft Excel were used to test the relationship between the selected determinants. The Kolmogorov-Smirnov test was used to check the normality of the data distribution (it revealed a normal distribution of the collected quantitative data), which made it possible to compute the Pearson correlation coefficient. A correlation matrix was then compiled in the course of the study. A significant relationship was found between the amount of non-cash transfers, POS and ATM. This confirms the correct orientation of public policy towards the development of technical systems and the digitisation of the economy. The results of this study are important for the banking system and for the policy of disseminating non-cash payments.
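The two statistical steps the abstract describes, a Kolmogorov-Smirnov normality check followed by a Pearson correlation matrix, can be sketched as follows. This is an illustrative sketch only, not the author's code: the monthly series below are synthetic stand-ins for the National Bank of Kazakhstan data (2004-2020).

```python
# Sketch of the abstract's method: KS normality test, then Pearson correlations.
# All series are simulated; the real study used National Bank monthly data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
months = 17 * 12                                  # Jan 2004 - Dec 2020
noncash = rng.normal(100.0, 15.0, months)         # non-cash transfer volume
pos = 0.8 * noncash + rng.normal(0, 5, months)    # POS terminal volume
atm = 0.6 * noncash + rng.normal(0, 8, months)    # ATM volume

# Step 1: KS test of each series against a fitted normal distribution.
# A p-value above 0.05 means normality is not rejected, which justifies
# Pearson's r rather than a rank-based coefficient.
for name, series in [("noncash", noncash), ("POS", pos), ("ATM", atm)]:
    stat, p = stats.kstest(series, "norm", args=(series.mean(), series.std()))
    print(f"{name}: KS statistic={stat:.3f}, p={p:.3f}")

# Step 2: Pearson correlation matrix of the three determinants.
corr = np.corrcoef(np.vstack([noncash, pos, atm]))
print(np.round(corr, 2))
```

Because the simulated POS and ATM series are constructed from the non-cash series, the off-diagonal correlations come out strongly positive, mirroring the significant relationship the study reports.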


37

Hookway, Nicholas, and Tim Graham. "‘22 Push-Ups for a Cause’: Depicting the Moral Self via Social Media Campaign #Mission22." M/C Journal 20, no. 4 (August 16, 2017). http://dx.doi.org/10.5204/mcj.1270.

Full text

Abstract:

Introduction

In 2016, the online cause #Mission22 went viral on social media. Established to raise awareness about high suicide rates among US military veterans, the campaign involves users posting a video of themselves doing 22 push-ups for 22 days and, on some platforms, donating and recruiting others to do the same. Based on a ‘big data’ analysis of Twitter data (over 225,883 unique tweets) during the height of the campaign, this article uses #Mission22 as a site in which to analyse how people depict, self-represent and self-tell as moral subjects using social media campaigns. In addition to spotlighting how such movements are mobilised to portray moral selves in particular ways, the analysis focuses on how a specific online cause like #Mission22 becomes popularly supported from a plethora of possible causes and how this selection and support is shaped by online networks. We speculate that part of the reason why Mission22 went ‘viral’ in the highly competitive attention economies of social media environments was related to visual depictions of affective bodily, fitness and moral practices.

Web 2.0 Culture: Self and Mass Depiction

Web 2.0 culture such as social networking sites (e.g., Facebook, Instagram), the advent of video sharing technologies (e.g., YouTube) and, more recently, micro-blogging services like Twitter have created new and transformative spaces to create, depict and display identity. Web 2.0 is primarily defined by user-generated content and interaction, whereby users are positioned as both consumers and producers, or ‘produsers’ of Web content (Bruns and Schmidt). Challenging traditional “broadcast” media models, Web 2.0 gives users a platform to produce their own content and for “the many” to communicate “with the many” (Castells). 
The growth of mass self communication, supported by broadband and wireless technologies, gives unprecedented power to individuals and groups to depict and represent their identities and relationships to a potential global audience.The rise of user-generated communication technologies dovetails with broader analyses of the changing contours of self and identity in late-modern times. Individuals in the early decades of the 21st century must take charge for how they depict, portray and self-tell as distinctive, unique and individual subjects (Beck and Beck-Gernsheim; Giddens; Bauman). As contemporary lives become less bound to the strictures of tradition, community and religion, the self becomes a project to be worked out and developed. These theorists suggest that via processes of individualisation, detraditionalisation and globalisation, contemporary subjects have become disconnected from the traditional coordinates of community and are thus faced with the imperative of self-construction and reinvention (Elliott and Lemert).More recently, theoretical and empirical work has attempted to interpret and evaluate how networks of mass self-depiction powered by new digital and wireless technologies are reshaping identity practices. For some theorists, like Bauman (Consuming 2) and Turkle, Web 2.0 is a worrying trend. Bauman suggests in the “confessional society” – think reality TV, talk shows, social media – people are compelled to curate and reflect upon their lives in the public realm. These public acts of self-depiction are part of a move to treating the self as a brand to be consumed, “as products capable of drawing attention, and attracting demands and customers” (Bauman and Lyon 33). The consumer quality of new communications sees connections replace relationships as social bonds become short-term and brittle. 
Turkle makes a similar argument, suggesting that our preoccupation with online curation centres on controlling our identities and depicting “perfect” versions of ourselves. The result is diminished forms of intimacy and connection; we preach authenticity and realness but practice self-curation and self-stylisation.A more positive body of literature has examined how Web technologies work as tools for the formation of self. This literature is based on more close-up and detailed readings of particular platforms and practices rather than relying on sweeping claims about technology and social change. Following Foucault, Bakardjieva & Gaden argue that personal blogs and social networking site (SNS) profiles constitute a contemporary technology of the self, whereby users employ Web 2.0 technologies in everyday life as practices of self care and self-formation. In a similar way, Sauter argues that SNSs, and in particular Facebook, are tools for self-formation through the way in which status updates provide a contemporary form of self-writing. Eschewing the notion of social media activity as narcissistic or self-obsessive, Sauter argues that SNSs are a techno-social practice of self-writing that facilitate individuals to “form relations to self and others by exposing themselves to others and obtaining their feedback” (Sauter 836). Other research has explored young people’s sustained use of social media, particularly Facebook, and how these sites are used to tell and archive “growing up” narratives and key rites of passage (Robards and Lincoln).One area of research that has been overlooked is how people use social media to construct and depict moral identity. Following Sauter’s arguments about the self work that occurs through networked self-writing, we can extend this to include the ethical self work performed and produced through online depictions. 
One exception is work by Hookway which analyses how people use blogs – an earlier Web 2.0 form – to write and self-examine their moral experiences. This research shows how bloggers use blogging as a form of online self-writing to construct a do-it-yourself form of morality that emphasises the self, emotions, body and ideals of authenticity. Hookway highlights the idea that morality is less about obedience to a code of rules or following external laws and more about becoming a particular moral person through a set of self-practices. Paralleling broader shifts in identity construction, people are no longer bound to the inherited guidelines of the past; morality becomes a project to be worked out, designed and depicted in relation to Others (Hookway).In Foucault’s terms, morality involves a process of ethical self-stylisation – an “aesthetics of existence” – based on “the ethical work of the self on the self” (Foucault 91). “Care of the self” involves a “set of occupations” or “labours” that connect and link the self to the Other through guidance, counselling and communication (Foucault 50). For Foucault, self-creation and self-care imply “care for others” as individuals perform a mutual concern with achieving an “art of existence”. This is a reciprocated ethics that obligates the individual to care for others in order to help them care for themselves.This stylisation of the ethical self has been drastically reshaped by the new opportunities for self-expression, belonging and communication offered in our digitally networked society. Digital worlds and spaces create new multi-media modes for individuals and groups to depict, perform and communicate particular moral identities and positions. 
Web 2.0 technologies are seeing the boundaries between the private and public sphere collapse as more people are willing to share the most intimate part of their moral lives with a diverse mix of strangers, friends, family and associates. The confessional quality of online spaces provides a unique opportunity to analyse “lay morality” or everyday moral understandings, constructions and depictions and how this is co-produced in relation to new technological affordances. Following Sayer (951), morality is defined as “how people should treat others and be treated by them, which of course is crucial for their subjective and objective well-being”. Morality is understood as a relational and evaluative practice that involves being responsive to how people are faring and whether they are suffering or flourishing. In this article, we use the #Mission22 campaign – a campaign that went “viral” across multiple social media platforms – as a unique site to analyse and visualise lay moral depictions and constructions. Specifically, we analyse the #Mission22 campaign on Twitter using a big data analysis. Much of the empirical work on online self-construction and depiction is either purely theoretical in the vein of Bauman, Turkle and Sauter or based on small qualitative samples such as the work by Lincoln and Robards and by Hookway. This article is unique not only in investigating the crafting of moral depictions in Web 2.0 forums but also in the scale of the textual and visual representation of mass moral self-depictions it captures and analyses.

Big Data Analysis of #Mission22 on Twitter

In order to empirically examine the #Mission22 campaign on Twitter, we used the Twitter API to collect over three months of tweets that contained the campaign hashtag (from 20 Aug. 2016 to 1 Dec. 2016). 
This resulted in a dataset of 2,908,559 tweets, of which 225,883 were non-duplicated (i.e., some tweets were collected multiple times by the crawler). There were 3,230 user accounts participating during this period, with each user tweeting 70 times on average. As Figure 1 shows, a sizeable percentage of users were quite active at the height of the campaign, although there is clearly a number of users who only tweeted once or twice. More specifically, there were 1,232 users (or 38%) who tweeted at least 100 times, and on the other hand 1,080 users (or 33%) who only tweeted two times or less. In addition, a tiny number of ‘power users’ (18, or about 0.6%) tweeted more than 400 times during this period.

Figure 1: Frequency distribution of #Mission22 tweets for each user in the dataset

To get a sense of what users were talking about during the campaign, we constructed a wordcloud out of the text data extracted from the tweets (see Figure 2). To provide more information and context, usernames (preceded with @) and hashtags (preceded with #) were included along with the words, providing a set of terms. As a result, the wordcloud also shows the user accounts and hashtags that were mentioned most often (note that #Mission22 was excluded from the data as it, by definition of the data collection process, has to occur in every tweet). In order to remove meaningless terms from the dataset we applied several text processing steps. First, all terms were converted to lowercase, such that “Veteran” and “veteran” are treated as the same term. Next, we applied a technique known as term frequency-inverse document frequency (tf-idf) to the tweet text data. Tf-idf effectively removes terms that occur so frequently that they provide no interesting information (e.g., the term “mission22”), and also terms that occur extremely infrequently. Finally, we removed English “stop words” from the text data, thereby eliminating common words such as “the” and “and”. 
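The preprocessing steps described above, lowercasing, tf-idf weighting, and stop-word removal, can be sketched as follows. This is an illustrative sketch, not the authors' pipeline: the tweets and the stop-word list are toy data, and tf-idf is implemented from scratch with the standard library.

```python
# Sketch of the text-processing steps: lowercase, drop stop words, weight by
# tf-idf so that terms occurring in every tweet (e.g. "#mission22") get zero.
import math
import re
from collections import Counter

STOP_WORDS = {"the", "and", "of", "a", "for", "is", "to", "our"}

def tokenize(text):
    """Lowercase a tweet and keep plain words, @mentions and #hashtags."""
    return [t for t in re.findall(r"[@#]?\w+", text.lower())
            if t not in STOP_WORDS]

tweets = [
    "The #Mission22 challenge: 22 push ups for veteran suicide awareness",
    "Support our veterans and take the #Mission22 challenge today",
    "22 veterans a day is 22 too many #Mission22 #22kill",
]
docs = [tokenize(t) for t in tweets]
n_docs = len(docs)
df = Counter(term for doc in docs for term in set(doc))  # document frequency

def tfidf(term, doc):
    tf = doc.count(term) / len(doc)
    idf = math.log(n_docs / df[term])  # zero for terms found in every tweet
    return tf * idf

# "#mission22" occurs in every toy tweet, so its tf-idf weight is zero --
# the uninformative-term removal described above.
print(tfidf("#mission22", docs[0]))
print(tfidf("veteran", docs[0]))
```

In practice a library vectorizer would be used over a quarter-million tweets, but the weighting logic is the same: ubiquitous terms fall out through a zero (or near-zero) idf, and stop words are excluded outright.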
Figure 2: Wordcloud of the #Mission22 tweet content

As Figure 2 shows, the most frequent terms revolve around the campaign message and call-to-action for suicide awareness, including, for example, “day”, “veteran”, “support”, “push-ups”, “band”, “challenge”, “suicide”, “fight”, and “alone”. A number of user accounts are also frequently mentioned, which largely relate to the heavily retweeted users (discussed further below). Furthermore, alongside the central #mission22 hashtag, a number of other popular hashtags were in circulation during the campaign, including “#veteran”, “#americasmission”, “#22kill”, and “#22adayis22toomany”. Table 1 provides the top 50 most frequently occurring terms in decreasing order.

Table 1: Top 50 words in the #Mission22 tweet content (decreasing order)

1-10: day, veteran, support, push-ups, @sandratxas, @defensebaron, @the_uso, @nbcnews, band, cimmunity [sic]
11-20: @mrbernarded, #veteran, everyday, challenge, suicide, veterans, fight, alone, vets, #americasmission
21-30: long, better, believe, today, haul, awareness, accepted, ptsd, 22k, shoutout
31-40: @uc_vets, @kappasigmauc, @ucthetachi, take, one, just, @piedmontlax, good, wrong, god
41-50: nothing, every, mission, help, #22kill, say, #veterans, weakness, #nevertrump, will

A surprising finding of our study is that the vast majority of tweets are simply just retweets of other users. The number of retweets was 223,666, which accounts for about 99% of all tweets in the dataset. Even more surprising was that the vast majority of these retweets are from a single tweet. Indeed, 221,088 (or 98%) of all tweets in the dataset were retweets of the following tweet that was authored on 2 March 2015 by @SandraTXAS (see Figure 3). Clearly we can say that this tweet went ‘viral’ (Jenders et al) in the sense that it became frequently retweeted and gained an increasing amount of attention due to its cumulative popularity and visibility over time. 
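The retweet tally reported above can be sketched as a simple count over the collected tweets. This is a hypothetical sketch, not the authors' code: the field name "retweeted_status" follows Twitter's v1.1 tweet payload, and the records themselves are invented stand-ins for the 225,883-tweet dataset.

```python
# Sketch: count what share of collected tweets are retweets and find the
# most retweeted original. Records are toy stand-ins for the real dataset.
from collections import Counter

tweets = [
    {"id": 1, "user": "@SandraTXAS", "text": "22 pushups to honor our vets"},
    {"id": 2, "retweeted_status": {"id": 1}},
    {"id": 3, "retweeted_status": {"id": 1}},
    {"id": 4, "retweeted_status": {"id": 1}},
    {"id": 5, "user": "@mrbernarded", "text": "Challenge accepted #Mission22"},
    {"id": 6, "retweeted_status": {"id": 5}},
]

# A tweet carrying a "retweeted_status" object is a retweet of that original.
retweets = [t for t in tweets if "retweeted_status" in t]
share = len(retweets) / len(tweets)  # ~99% in the actual #Mission22 dataset

counts = Counter(t["retweeted_status"]["id"] for t in retweets)
top_id, top_n = counts.most_common(1)[0]
print(f"{share:.0%} of tweets are retweets; tweet {top_id} leads with {top_n}")
```

Ranking `counts.most_common(10)` over the full dataset is what yields the "top 10" retweets analysed in the figures that follow.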
Figure 3: #1 most retweeted #Mission22 tweet – @SandraTXAS (https://twitter.com/SandraTXAS)

This highly retweeted or viral #Mission22 tweet provides a point of departure to examine what aspects of the tweet content influence the virality or popularity of #Mission22 tweets during the height of the campaign. To do this, we extracted the next nine most retweeted tweets from our dataset, providing an analysis of the “top 10” retweets (including the @SandraTXAS tweet above).

Figure 4: #2 most retweeted - @mrbernarded (https://twitter.com/mrbernarded/status/776221040582295553). This tweet was retweeted 715 times in our dataset.
Figure 5: #3 most retweeted - @Mission22 (https://twitter.com/Mission22/status/799872548863414272). This was retweeted 317 times in our dataset.
Figure 6: #4 most retweeted - @UCThetaChi (https://twitter.com/UCThetaChi/status/784775641430384640). This was retweeted 180 times in our dataset.
Figure 7: #5 most retweeted - @PamKeith2016 (https://twitter.com/PamKeith2016/status/782975576550305792). This was retweeted 121 times in our dataset.
Figure 8: #6 most retweeted - @PiedmontLax (https://twitter.com/PiedmontLax/status/770749891698122752). This was retweeted 105 times in our dataset.
Figure 9: #7 most retweeted - @PiedmontLax (https://twitter.com/PiedmontLax/status/771181070066692098). This was retweeted 78 times in our dataset.
Figure 10: #8 most retweeted - @PatriotBrother (https://twitter.com/PatriotBrother/status/804387050728394752). This was retweeted 59 times in our dataset.
Figure 11: #9 most retweeted - @alexgotayjr (https://twitter.com/alexgotayjr/status/787112936644849664). This was retweeted 49 times in our dataset.
Figure 12: #10 most retweeted - @csjacobson89 (https://twitter.com/csjacobson89/status/772921614044233729). This was retweeted 45 times in our dataset.

Discussion

This article has provided the first “big data” analysis of the #Mission22 movement that went viral across multiple social media platforms in 2016. 
We began by arguing that Web 2.0 has ushered in profound changes to how people depict and construct identities that articulate with wider transformations in self and identity in conditions of late-modernity. The “confessional” quality of Web 2.0 means individuals and groups are presented with unprecedented opportunities to “mass self-depict” through new communication and Internet technologies. We suggest that the focus on how Web technologies are implicated in the formation of moral subjectivities is something that has been overlooked in the extant research on identity and Web 2.0 technologies.Filling this gap, we used the #Mission22 movement on Twitter as an empirical site to analyse how contemporary subjects construct and visually depict moral identities in online contexts. A central finding of our analysis of 225,883 Twitter posts is that most engagement with #Mission22 was through retweeting. Our data show that retweets were by far the most popular way to interact and engage with the movement. In other words, most people were not producing original or new content in how they participated in the movement but were re-sharing – re-depicting – what others had shared. This finding highlights the importance of paying attention to the architectural affordances of social media platforms, in this case, the affordances of the ‘retweet’ button, and how they shape online identity practices and moral expression. We use moral expression here as a broad term to capture the different ways individuals and groups make moral evaluations based on a responsiveness to how people are faring and whether they are suffering or flourishing (Sayer). 
This approach provides an emic account of everyday morality and precludes, for example, wider philosophical debates about whether patriotism or nationalistic solidarity can be understood as moral values. The prominence of the retweet in driving the shape and nature of #Mission22 raises questions about the depth of moral engagement being communicated. Is the dominance of the retweet suggestive of a type of “moral slacktivism”? Like its online political equivalent, does the retweet highlight a shallow and cursory involvement with a cause or movement? Did online engagement translate to concrete moral actions such as making a donation to the cause or engaging in some other form of civic activity to draw attention to the movement? These questions are beyond the scope of this article, but it is interesting to consider the link between the affordances of the platform, capacity for moral expression and how this translates to face-to-face moral action. Putting aside questions of depth, people are compelled not to ignore these posts; they move from “seeing” to “posting”, to taking action within the affordances of the architectural platform. What then is moving Twitter users to morally engage with this content? How did this movement go viral? What helped bust this movement out of the “long tail distribution” which characterises most movements – that is, few movements “take off” and become durable within the congested attention economies of social media environments? The Top 10 most retweeted tweets provide powerful answers here. All of them feature highly emotive and affective visual depictions, either high impact photos and statements, or videos of people/groups doing pushups in solidarity together. The images and videos align affective, bodily and fitness practices with nationalistic and patriotic themes to produce a powerful and moving moral cocktail. 
The Top 50 words also capture the emotionally evocative use of moral language: words like alone, fight, challenge, better, believe, good, wrong, god, help, mission, weakness and will. The emotional and embodied visual depictions that characterise the Top 10 retweets and Top 50 words highlight how moral identity is not just a cerebral practice, but one that is fundamentally emotional and bodily. We do morality not just with our minds and heads but also with our bodies and our hearts. Part of the power of this movement, then, is the way it mobilises interest and involvement with the movement through a physical and embodied practice – doing push-ups. Visually depicting oneself doing push-ups online is a powerful display of moral identity. The “lay morality” being communicated is that not only are you somebody who cares about the flourishing and suffering of Others, you are also a fit, active and engaged citizen. And of course, the subject who actively takes responsibility for their health and well-being is highly valued in neoliberal risk contexts (Lupton). There is also a strong gendered dimension to the visual depictions used in #Mission22. All of the Top 10 retweets feature images of men, mostly doing push-ups in groups. In the case of the second most popular retweet, it is two men in suits doing push-ups while three sexualised female singers “look on” admiringly. Further analysis needs to be done to detail the gendered composition of movement participation, but it is interesting to speculate whether men were more likely to participate. The combination of demonstrating care for the Other via a strong assertion of physical strength makes this a potentially more masculinised form of moral self-expression. Overall, #Mission22 highlights how online self-work and cultivation can have a strong moral dimension. In Foucault’s language, the self-work involved in posting a video or image of yourself doing push-ups can be read as “an intensification of social relations”.
It involves an ethics that is about self-creation through visual and textual depictions. Following the more pessimistic line of Bauman or Turkle, posting images of oneself doing push-ups might be seen as evidence of narcissism or a consumerist self-absorption. Rather than narcissism, we want to suggest that #Mission22 highlights how a self-based moral practice – based on bodily, emotional and visual depictions – can extend to Others in an act of mutual care and exchange. Again, Foucault helps clarify our argument: “the intensification of the concern for the self goes hand in hand with a valorisation of the Other”. What our work does is show how this operates empirically on a large scale in the new confessional contexts of Web 2.0 and its cultures of mass self-depiction.

References

Bakardjieva, Maria, and Georgia Gaden. “Web 2.0 Technologies of the Self.” Philosophy & Technology 25.3 (2012): 399–413.
Bauman, Zygmunt. Liquid Modernity. Cambridge: Polity, 2000.
———. Consuming Life. Cambridge: Polity, 2007.
———, and David Lyon. Liquid Surveillance. Cambridge: Polity, 2013.
Beck, Ulrich, and Elizabeth Beck-Gernsheim. Individualisation. London: Sage, 2001.
Bruns, Axel, and Jan-Hinrik Schmidt. “Produsage: A Closer Look at Continuing Developments.” New Review of Hypermedia and Multimedia 17.1 (2011): 3–7.
Dutta-Bergman, Mohan J. “Primary Sources of Health Information: Comparisons in the Domain of Health Attitudes, Health Cognitions, and Health Behaviors.” Health Communication 16.3 (2004): 273–288.
Elliott, Anthony, and Charles Lemert. The New Individualism: The Emotional Costs of Globalization. New York: Routledge, 2006.
Foucault, Michel. The Care of the Self: The History of Sexuality. Vol. 3. New York: Random House, 1986.
Giddens, Anthony. Modernity and Self-Identity: Self and Society in the Late Modern Age. Cambridge: Polity, 1991.
Hookway, Nicholas. “The Moral Self: Class, Narcissism and the Problem of Do-It-Yourself Moralities.” The Sociological Review, 15 Mar. 2017. <http://journals.sagepub.com/doi/abs/10.1177/0038026117699540?journalCode=sora>.
Jenders, Maximilian, et al. “Analyzing and Predicting Viral Tweets.” Proceedings of the 22nd International Conference on World Wide Web (WWW). Rio de Janeiro, 13-17 May 2013.
Kata, Anna. “Anti-Vaccine Activists, Web 2.0, and the Postmodern Paradigm: An Overview of Tactics and Tropes Used Online by the Anti-Vaccination Movement.” Vaccine 30.25 (2012): 3778–89.
Lincoln, Sian, and Brady Robards. “Editing the Project of the Self: Sustained Facebook Use and Growing Up Online.” Journal of Youth Studies 20.4 (2017): 518–531.
Lupton, Deborah. The Imperative of Health: Public Health and the Regulated Body. London: Sage, 1995.
Sauter, Theresa. “‘What's on Your Mind?’ Writing on Facebook as a Tool for Self-Formation.” New Media & Society 16.5 (2014): 823–839.
Sayer, Andrew. Why Things Matter to People: Social Science, Values and Ethical Life. Cambridge: Cambridge University Press, 2011.
Smith, Gavin J.D., and Pat O’Malley. “Driving Politics: Data-Driven Governance and Resistance.” The British Journal of Criminology 56.1 (2016): 1–24.
Turkle, Sherry. Reclaiming Conversation: The Power of Talk in a Digital Age. New York: Penguin, 2015.
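The tallies the abstract reports, the share of posts that were retweets and a "Top words" ranking, can be illustrated with a minimal Python sketch. The toy records and field names ("is_retweet", "text") are hypothetical stand-ins, not the authors' actual dataset schema or code.

```python
from collections import Counter

# Hypothetical records standing in for the 225,883 collected posts;
# the field names are illustrative, not the authors' actual schema.
posts = [
    {"is_retweet": True,  "text": "RT Never fight alone #Mission22"},
    {"is_retweet": True,  "text": "RT 22 pushups for 22 days #Mission22"},
    {"is_retweet": False, "text": "My pushup challenge today #Mission22"},
]

# Share of engagement that is retweeting rather than original content.
retweet_share = sum(p["is_retweet"] for p in posts) / len(posts)

# Simple word-frequency tally of the kind behind a "Top 50 words" list.
words = Counter(
    w.lower().strip("#")
    for p in posts
    for w in p["text"].split()
    if w.lower() not in {"rt", "the", "for"}  # toy stop-word list
)

print(f"retweet share: {retweet_share:.0%}")
print(words.most_common(3))
```

Run over a full dataset, the same two passes produce the kind of retweet share and word ranking the abstract describes.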


38

Nansen, Bjorn. "Accidental, Assisted, Automated: An Emerging Repertoire of Infant Mobile Media Techniques." M/C Journal 18, no. 5 (October 14, 2015). http://dx.doi.org/10.5204/mcj.1026.

Full text

Abstract:

Introduction It is now commonplace for babies to begin their lives inhabiting media environments characterised by the presence, distribution, and mobility of digital devices and screens. Such arrangements can be traced, in part, to the birth of a new regime of mobile and touchscreen media beginning with the release of the iPhone in 2007 and the iPad in 2010, which stimulated a surge in household media consumption, underpinned by broadband and wireless Internet infrastructures. Research into these conditions of ambient mediation at the beginnings of life, however, is currently dominated by medical and educational literature, largely removed from media studies approaches that seek to understand the everyday contexts of babies using media. Putting aside discourses of promise or peril familiar to researchers of children’s media (Buckingham; Postman), this paper draws on ongoing research in both domestic and social media settings exploring infants’ everyday encounters and entanglements with mobile media and communication technologies. The paper identifies the ways infants’ mobile communication is assembled and distributed through touchscreen interfaces, proxy parent users, and commercial software sorting. It argues that within these interfacial, intermediary, and interactive contexts, we can conceptualise infants’ communicative agency through an emerging repertoire of techniques: accidental, assisted and automated. This assemblage of infant communication recognises that children no longer live with but in media (Deuze), which underscores the impossibility of a path of media resistance found in medical discourses of ‘exposure’ and restriction, and instead points to the need for critical and ethical responses to these immanent conditions of infant media life. Background and Approach Infants, understandably, have largely been excluded from analyses of mobile mediality given their historically limited engagement with or capacity to use mobile media. 
Yet, this situation is undergoing change as mobile devices become increasingly prominent in children’s homes (OfCom; Rideout), and as touchscreen interfaces lower thresholds of usability (Buckleitner; Hourcade et al.). The dominant frameworks within research addressing infants and media continue to resonate with long running and widely circulated debates in the study of children and mass media (Wartella and Robb), responding in contradictory ways to what is seen as an ever-increasing ‘technologization of childhood’ (McPake, Plowman and Stephen). Education research centres on digital literacy, emphasising the potential of mobile computing for these future digital learners, labourers, and citizens (McPake, Plowman and Stephen). Alternatively, health research largely positions mobile media within the rubric of ‘screen time’ inherited from older broadcast models, with paediatric groups continuing to caution parents about the dangers of infants’ ‘exposure’ to electronic screens (Strasburger and Hogan), without differentiating between screen types or activities. In turn, a range of digital media channels seek to propel or profit from infant media culture, with a number of review sites, YouTube channels and tech blogs promoting or surveying the latest gadgets and apps for babies. Within media studies, research is beginning to analyse the practices, conceptions and implications of digital interfaces and content for younger children. Studies are, for example, quantifying the devices, activities, and time spent by young children with mobile devices (Ofcom; Rideout), reviewing the design and marketing of children’s mobile application software products (e.g. Shuler), analysing digital content shared about babies on social media platforms (Kumar & Schoenebeck; Morris), and exploring emerging interactive spaces and technologies shaping young children’s ‘postdigital’ play (Giddings; Jayemanne, Nansen and Apperley). 
This paper extends this growing area of research by focusing specifically on infants’ early encounters, contexts, and configurations of mobile mediality, offering some preliminary analysis of an emerging repertoire of mobile communication techniques: accidental, assisted, and automated. That is, through infants playing with devices and accidentally activating them; through others such as parents assisting use; and through software features in applications that help to automate interaction. This analysis draws from an ongoing research project exploring young children’s mobile and interactive media use in domestic settings, which is employing ethnographic techniques including household technology tours and interviews, as well as participant observation and demonstrations of infant media interaction. To date, 19 families with 31 children aged between 0 and 5, located in Melbourne, Australia, have participated. These participating families are largely homogeneous and privileged, though they are a sample of relatively early and heavy adopters that reveals emerging qualities about young children’s changing media environments and encounters. This approach builds on established traditions of media and ethnographic research on technology consumption and use within domestic spaces (e.g. Mackay and Ivey; Silverstone and Hirsch), but turns to the digital media encountered by infants, the geographies and routines of these encounters, and how families mediate these encounters within the contexts of home life. This paper offers some preliminary findings from this research, drawing mostly from discussions with parents about their babies’ use of digital, mobile, and touchscreen media. In this larger project, the domestic and family research is accompanied by the collection of online data focused on the cultural context of, and content shared about, infants’ mobile media use.
In this paper I report on social media analysis of publicly shared images tagged with #babyselfie queried from Instagram’s API. I viewed all publicly shared images on Instagram tagged with #babyselfie, and collected the associated captions, comments, hashtags, and metadata, over a period of 48 hours in October 2014, resulting in a dataset of 324 posts. Clearly, using this data for research purposes raises ethical issues about privacy and consent, given the posts are being used in an unintended context from which they were originally shared; something that is further complicated by the research focus on young children. These risks, amplified by the ease of extracting online data using digital methods research (Rogers), need to be both minimised and balanced against the value of the research aims and outcomes (Highfield and Leaver). To minimise risks, captions and comments cited in this paper have been de-identified; whilst the value of this data lies in complementing and contextualising the more ethnographically informed research, despite perceptions of incompatibility, through analysis of the wider cultural and mediated networks in which babies’ digital lives are now shared and represented. This field of cultural production also includes analysis of examples of children’s software products from mobile app stores that support baby image capture and sharing, and in particular in this paper discussion of the My Baby Selfie app from the iTunes App Store and the Baby Selfie app from the Google Play store. The rationale for drawing on these multiple sources of data within the larger project is to locate young children’s digital entanglements within the diverse places, platforms and politics in which they unfold. This research scope is limited by the constraints of this short paper; however, different sources of data are drawn upon here in order to identify, compare, and contextualise the emerging themes of accidental, assisted, and automated.
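The de-identification step described above, stripping usernames and personal names before quoting captions, might be sketched as follows. The record layout and the deidentify helper are illustrative assumptions, not the study's actual pipeline or Instagram's API response format.

```python
import re
from collections import Counter

# Illustrative stand-ins for the 324 collected #babyselfie posts;
# field names are hypothetical, not Instagram's actual schema.
posts = [
    {"user": "parent_01", "caption": "Isabella's first #babyselfie",
     "tags": ["babyselfie"]},
    {"user": "parent_02", "caption": "He pushed the button himself! #babyselfie #cute",
     "tags": ["babyselfie", "cute"]},
]

def deidentify(post):
    """Drop the username and mask possessive personal names before quoting."""
    caption = re.sub(r"\b[A-Z][a-z]+'s\b", "[child]'s", post["caption"])
    return {"caption": caption, "tags": post["tags"]}

cleaned = [deidentify(p) for p in posts]

# Hashtag tally over the de-identified records only.
tag_counts = Counter(t for p in cleaned for t in p["tags"])

print(cleaned[0]["caption"])
print(tag_counts.most_common(1))
```

The point of the design is that all downstream analysis (here, the hashtag tally) runs only on records from which identifying fields have already been removed.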
Accidental Media Use The domestication and aggregation of mobile media in the home, principally laptops, mobile phones and tablet computers, has established polymediated environments in which infants are increasingly surrounded by mobile media; in which they often observe their parents using mobile devices; and in which the flashing of screens unsurprisingly draws their attention. Living within these ambient media environments, then, infants often observe, find and reach for mobile devices: on the iPad or whatever, then what's actually happening in front of them, then naturally they'll gravitate towards it. These media encounters are animated by touchscreen interfaces that are responsive to the gestural actions of infants. Conversely, touchscreen interfaces drive attempts to swipe legacy media screens. Underscoring the nomenclature of ‘natural user interfaces’ within the design and manufacturer communities, screens lighting up through touch prompts interest, interaction, and even habituation through gestural interaction, especially swiping: It's funny because when she was younger she would go up to the T.V. and she would try swiping to turn the channel. They can grab it and start playing with it. It just shows that it's so much part of their world … to swipe something. Despite the demonstrable capacities of infants to interact with mobile screens, discussions with parents revealed that accidental forms of media engagement were a more regular consequence of these ambient contexts, interfacial affordances and early encounters with mobile media. It was not uncommon for infants to accidentally swipe and activate applications, to temporarily lock the screen, or even to dial contacts: He didn't know the password, and he just kept locking it … find it disabled for 15 minutes. If I've got that on YouTube, they can quite quickly get on to some you know [video] … by pressing … and they don't do it on purpose, they're just pushing random buttons. He does Skype calls!
I think he recognizes their image, the icon. Then just taps it and … Similarly, in the analysis of publicly shared images on Instagram tagged with #babyselfie, there were instances in which it appeared infants had accidentally taken photos with the cameraphone, based on the image content, photo framing or descriptions in the caption. Many of these photos showed a baby with an arm in view reaching towards the phone in a classic trope of a selfie image; others were poorly framed shots showing parts of baby faces too close to the camera lens, suggesting they accidentally took the photograph; whilst most definitive were the many instances in which the caption of the image posted by parents directly attributed the photographic production to an infant: Isabella's first #babyselfie She actually pushed the button herself! My little man loves taking selfies lol Whilst, then, the research identified many instances in which infants accidentally engaged in mobile media use, sometimes managing to communicate with an unsuspecting interlocutor, it is important to acknowledge such encounters could not have emerged without the enabling infrastructure of ambient media contexts and touchscreen interfaces, nor been observed without studying this infrastructure utilising materially-oriented ethnographic perspectives (Star). Significant, too, was the intermediary role played by parents. With parents acting as intermediaries in household environments or as proxy users in posting content on their behalf, multiple forms of assisted infant communication were identified. Assisted Media Use Assisted communication emerged from discussions with parents about the ways, routines, and rationale for making mobile media available to their children.
These sometimes revolved around keeping their child engaged whilst they were travelling as a family – part of what has been described as the pass-back effect – but were more frequently discussed in terms of sharing and showing digital content, especially family photographs, and in facilitating infant mediated communication with relatives abroad: they love scrolling through my photos on my iPhone … We quite often just have them [on Skype] … have the computers in there while we're having dinner … the laptop will be there, opened up at one end of the table with the family here and there will be my sister having breakfast with her family in Ireland … These forms of parental mediated communication did not, however, simply situate or construct infants as passive recipients of their parents’ desires to make media content available or their efforts to establish communication with extended family members. Instead, the research revealed that infants were often active participants in these processes, pushing for access to devices, digital content, and mediated communication. These distributed relations of agency were expressed through infants’ verbal requests and gestural urging; through the ways parents initiated use by, for example, unlocking a device, preparing software, or loading an application, but then handed them over to infants to play, explore or communicate; and through wider networks of relations in which others, including siblings, acted as proxies or had a say in the kinds of media infants used: she can do it, once I've unlocked … even, even with iView, once I'm on iView she can pick her own show and then go to the channel she wants to go to. We had my son’s birthday and there were some photos, some footage of us singing happy birthday and the little one just wants to watch it over and over again. She thinks it's fantastic watching herself. He [sibling] becomes like a proxy user … with the second one … they don't even need the agency because of their sibling.
Similarly, the assisted communication emerging from the analysis of #babyselfie images on Instagram revealed that parents were not simply determining infant media use, but often acting as proxies on their behalf. #Selfie obsessed baby. Seriously though. He won't stop. Insists on pressing the button and everything. He sees my phone and points and says "Pic? Pic?" I've created a monster lol. In sharing this digital content on social networks, parents were acting as intermediaries in the communication of their children’s digital images. Clearly they were determining the platforms and networks where these images were published online, yet the production of these images was more uncertain, with accidental self-portraits taken by infants suggesting they played a key role in the circuits of digital photography distribution (van Dijck). Automated Media Use The production, archiving, circulation and reception of these images speaks to larger assemblages of media in which software protocols and algorithms are increasingly embedded in, and help to configure, everyday life (e.g. Chun; Gillespie), including young children’s media lives (Ito). Here, software automates processes of sorting and shaping information, and in doing so both empowers and governs forms of infant media conduct. The final theme emerging from the research, then, is the identification of automated forms of infant mobile media use enabled through software applications and algorithmic operations. Automated techniques of interaction emerged as part of the repertoire of infant mobile mediality and communication through observations and discussions during the family research, and through surveying commercial software applications. Within family discussions, parents spoke about the ways digital databases and applications facilitated infant exploration and navigation.
These included photo galleries stored on mobile devices, as well as children’s Internet television services such as the Australian Broadcasting Corporation’s catch-up online TV service, iView, which are visually organised and easily scrollable. In addition, algorithmic functions for sorting, recommending and autoplay on the video-sharing platform YouTube meant that infants were often automatically delivered an ongoing stream of content: They just keep watching it [YouTube]. So it leads on from the other thing. Which is pretty amazing, that's pretty interactive. Yeah, but the kids like, like if they've watched a YouTube clip now, they'll know to look down the next column to see what they want to play next … you get suggestions there so. Forms of automated communication specifically addressing infants were also located in examples of children’s software products from mobile app stores: the My Baby Selfie app from the iTunes App Store and the Baby Selfie app from the Google Play store. These applications are designed to support baby image capture and sharing, promising to “allow your baby to take a photo of him himself [sic]” (Giudicelli), based on automated software features that use sounds and images to capture a baby’s attention and touch sensors to activate image capture and storage. In one sense, these applications may appear to empower infants to participate in the production of digital content, namely selfies, yet they also clearly distribute this agency with and through mobile media and digital software. Moreover, they imply forms of conduct, expectations and imperatives around the possibilities of infant presence in a participatory digital culture. Immanent Ethic and Critique Digital participation typically assumes a degree of individual agency in deciding what to share, post, or communicate that is not typically available to infants.
The emerging communicative practices of infants detailed above suggest that infants are increasingly connecting; however, this communicative agency is distributed amongst a network of ambient devices, user-friendly interfaces, proxy users, and software sorting. Such distributions reflect conditions Deuze has noted: that we do not live with but in media. He argues this ubiquity, habituation, and embodiment of media and communication technologies pervades and constitutes our lives, becoming effectively invisible and negating the possibility of an outside from which resistance can be mounted. Whilst resistance remains a solution promoted in medical discourses and paediatric advice proposing no ‘screen time’ for children aged below two (Strasburger and Hogan), Deuze’s thesis suggests this is ontologically futile, and instead we should strive for a more immanent relation that seeks to modulate choices and actions from within our media life: finding “creative ways to wield the awesome communication power of media both ethically and aesthetically” ("Unseen" 367). An immanent ethics and a critical aesthetics of infant mediated life can be located in examples of cultural production and everyday parental practice addressing the arrangements of infant mobile media and communication discussed above. For example, an article in the Guardian, ‘Toddlers pose a serious risk to smartphones and tablets’, parodies moral panics around children’s exposure to media by noting that media devices are at greater risk of physical damage from children handling them; whilst a design project from the Eindhoven Academy – called New Born Fame – built from soft toys shaped like social media logos, with motion and touch sensors that activate image capture (much like babyselfie apps) and automated social media sharing, critically interrogates the ways infants are increasingly bound up with the networked and algorithmic regimes of our computational culture.
Finally, parents in this research revealed that they carefully considered the ethics of media in their children’s lives by organising everyday media practices that balanced dwelling with new, old, and non-media forms, and by curating their digitally mediated interactions and archives with an awareness that they were custodians of their children’s digital memories (Garde-Hansen et al.). I suggest these examples work from an immanent ethical and critical position in order to make visible and operate from within the conditions of infant media life. Rather than seeking to deny or avoid the diversity of encounters infants have with and through mobile media in their everyday lives, this analysis has explored the ways infants are increasingly configured as users of mobile media and communication technologies, identifying an emerging repertoire of infant mobile communication techniques. The emerging practices of infant mobile communication outlined here are intertwined with contemporary household media environments, and assembled through accidental, assisted, and automated relations of living with mobile media. Moreover, such entanglements of use are both represented and discursively reconfigured through multiple channels, contexts, and networks of public mediation. Together, these diverse contexts and forms of conduct have implications for both studying and understanding the ways babies are emerging as active participants and interpellated subjects within a continually expanding digital culture.

Acknowledgments

This research was supported with funding from the Australian Research Council (ARC) Discovery Early Career Researcher Award (DE130100735). I would like to express my appreciation to the children and families involved in this study for their generous contribution of time and experiences.

References

Buckingham, David. After the Death of Childhood: Growing Up in the Age of Electronic Media. Oxford: Polity Press, 2000.
Buckleitner, Warren. “A Taxonomy of Multi-Touch Interaction Styles, by Stage.” Children's Technology Review 18.11 (2011): 10-11.
Chun, Wendy. Programmed Visions: Software and Memory. Cambridge: MIT Press, 2011.
Deuze, Mark. “Media Life.” Media, Culture and Society 33.1 (2011): 137-148.
———. “The Unseen Disappearance of Invisible Media: A Response to Sebastian Kubitschko and Daniel Knapp.” Media, Culture and Society 34.3 (2012): 365-368.
Garde-Hansen, Joanne, Andrew Hoskins and Anna Reading. Save As … Digital Memories. Hampshire: Palgrave Macmillan, 2009.
Giddings, Seth. Gameworlds: Virtual Media and Children’s Everyday Play. New York: Bloomsbury, 2014.
Gillespie, Tarleton. “The Relevance of Algorithms.” Media Technologies: Essays on Communication, Materiality, and Society. Eds. Tarleton Gillespie, Pablo Boczkowski and Kirsten Foot. Cambridge: MIT Press, 2014.
Giudicelli, Patrick. “My Baby Selfie.” iTunes App Store. Apple Inc., 2015.
Highfield, Tim, and Tama Leaver. “A Methodology for Mapping Instagram Hashtags.” First Monday 20.1 (2015).
Hourcade, Juan Pablo, Sarah Mascher, David Wu, and Luiza Pantoja. “Look, My Baby Is Using an iPad! An Analysis of YouTube Videos of Infants and Toddlers Using Tablets.” Proceedings of CHI 15. New York: ACM Press, 2015. 1915–1924.
Ito, Mizuko. Engineering Play: A Cultural History of Children’s Software. Cambridge: MIT Press, 2009.
Jayemanne, Darshana, Bjorn Nansen and Thomas Apperley. “Post-Digital Play and the Aesthetics of Recruitment.” Proceedings of Digital Games Research Association (DiGRA) 2015. Lüneburg, 14-17 May 2015.
Kumar, Priya, and Sarita Schoenebeck. “The Modern Day Baby Book: Enacting Good Mothering and Stewarding Privacy on Facebook.” Proceedings of CSCW 2015. Vancouver, 14-18 March 2015.
Mackay, Hugh, and Darren Ivey. Modern Media in the Home: An Ethnographic Study. Rome: John Libbey, 2004.
McPake, Joanna, Lydia Plowman and Christine Stephen. “The Technologisation of Childhood? Young Children and Technology in the Home.” Children and Society 24.1 (2010): 63–74.
Morris, Meredith. “Social Networking Site Use by Mothers of Young Children.” Proceedings of CSCW 2014. 1272-1282.
OfCom. Children and Parents: Media Use and Attitudes Report. London: OfCom, 2013.
Postman, Neil. Technopoly: The Surrender of Culture to Technology. New York: Vintage, 1993.
Rideout, Victoria. Zero to Eight: Children’s Media Use in America 2013. Common Sense Media, 2013.
Rogers, Richard. Digital Methods. Boston: MIT Press, 2013.
Shuler, Carly. iLearn: A Content Analysis of the iTunes App Store’s Education Section. New York: The Joan Ganz Cooney Center at Sesame Workshop, 2009.
Silverstone, Roger, and Eric Hirsch, eds. Consuming Technologies: Media and Information in Domestic Spaces. London: Routledge, 1992.
Star, Susan Leigh. “The Ethnography of Infrastructure.” American Behavioral Scientist 43.3 (1999): 377–391.
Strasburger, Victor, and Marjorie Hogan. “Policy Statement from the American Academy of Pediatrics: Children, Adolescents, and the Media.” Pediatrics 132 (2013): 958-961.
Van Dijck, José. “Digital Photography: Communication, Identity, Memory.” Visual Communication 7.1 (2008): 57-76.
Wartella, Ellen, and Michael Robb. “Historical and Recurring Concerns about Children’s Use of the Mass Media.” The Handbook of Children, Media, and Development. Eds. Sandra Calvert and Barbara Wilson. Malden: Blackwell, 2008.


39

Goggin, Gerard. "‘mobile text’." M/C Journal 7, no. 1 (January 1, 2004). http://dx.doi.org/10.5204/mcj.2312.

Full text

Abstract:

Mobile In many countries, more people have mobile phones than they do fixed-line phones. Mobile phones are one of the fastest growing technologies ever, outstripping even the internet in many respects. With the advent and widespread deployment of digital systems, mobile phones were used by an estimated 1,158,254,300 people worldwide in 2002 (up from approximately 91 million in 1995), 51.4% of total telephone subscribers (ITU). One of the reasons for this is mobility itself: the ability for people to talk on the phone wherever they are. The communicative possibilities opened up by mobile phones have produced new uses and new discourses (see Katz and Aakhus; Brown, Green, and Harper; and Plant). Contemporary soundscapes now feature not only voice calls in previously quiet public spaces such as buses or restaurants but also the aural irruptions of customised polyphonic ringtones identifying whose phone is ringing by the tune downloaded. The mobile phone plays an important role in contemporary visual and material culture as fashion item and status symbol. Most tragically, one might point to the tableau of people in the twin towers of the World Trade Centre, or aboard a plane about to crash, calling their loved ones to say good-bye (Galvin). By contrast, one can look on at the bathos of Australian cricketer Shane Warne’s predilection for pressing his mobile phone into service to arrange wanted and unwanted assignations while on tour. In this article, I wish to consider another important and so far also under-theorised aspect of mobile phones: text. Of contemporary textual and semiotic systems, mobile text is only a recent addition. Yet it already produces millions of inscriptions each day, and promises to be of far-reaching significance. Txt Txt msg ws an acidnt. no 1 expcted it. Whn the 1st txt msg ws sent, in 1993 by Nokia eng stdnt Riku Pihkonen, the telcom cpnies thought it ws nt important. SMS – Short Message Service – ws nt considrd a majr pt of GSM.
Like mny teks, the *pwr* of txt — indeed, the *pwr* of the fon — wz discvrd by users. In the case of txt mssng, the usrs were the yng or poor in the W and E. (Agar 105)

As Jon Agar suggests in Constant Touch, textual communication through the mobile phone was an after-thought. Mobile phones use radio waves, operating on a cellular system. The first such mobile service went live in Chicago in December 1978, in Sweden in 1981, in January 1985 in the United Kingdom (Agar), and in the mid-1980s in Australia. Mobile cellular systems allowed efficient sharing of scarce spectrum, improvements in handsets and quality, drawing on advances in science and engineering. In the first instance, technology designers, manufacturers, and mobile phone companies had been preoccupied with transferring telephone capabilities and culture to the mobile phone platform. With the growth in data communications from the 1960s onwards, consideration had been given to the data capabilities of the mobile phone. One difficulty, however, had been the poor quality and slow transfer rates of data communications over mobile networks, especially with first-generation analogue and early second-generation digital mobile phones. As the internet was widely and wildly adopted in the early to mid-1990s, mobile phone proponents looked at mimicking internet and online data services possibilities on their hand-held devices. What could work on a computer screen, it was thought, could be reinvented in miniature for the mobile phone — and hence much money was invested into the wireless access protocol (or WAP), which spectacularly flopped. The future of mobiles as a material support for text culture was not to lie, at first at least, in aping the world-wide web for the phone. It came from an unexpected direction: cheap, simple letters, spelling out short messages with strange new ellipses. SMS was built into the European Global System for Mobile (GSM) standard as an insignificant, additional capability.
A number of telecommunications manufacturers thought so little of SMS as to not design or even offer the equipment needed (the servers, for instance) for the distribution of the messages. The character sets were limited, the keyboards small, the typeface displays rudimentary, and there was no acknowledgement that messages were actually received by the recipient. Yet SMS was cheap, and it offered one-to-one, or one-to-many, text communications that could be read at leisure, or more often, immediately. SMS was avidly taken up by young people, forming a new culture of media use. Sending a text message offered a relatively cheap and affordable alternative to the still expensive timed calls of voice mobile. In its early beginnings, mobile text can be seen as a subcultural activity. The text culture featured compressed, cryptic messages, with users devising their own abbreviations and grammar. One of the reasons young people took to texting was a tactic of consolidating and shaping their own shared culture, in distinction from the general culture dominated by their parents and other adults. Mobile texting became involved in a wider reworking of youth culture, involving other new media forms and technologies, and cultural developments (Butcher and Thomas). Another subculture that was also in the vanguard of SMS was the Deaf ‘community’. Though Alexander Graham Bell, celebrated as the inventor of the telephone, very much had his hearing-impaired wife in mind in devising a new form of communication, Deaf people have been systematically left off the telecommunications network since this time. Deaf people pioneered an earlier form of text communications based on the Baudot standard, used for telex communications. Known as teletypewriter (TTY), or telecommunications device for the Deaf (TDD) in the US, this technology allowed Deaf people to communicate with each other by connecting such devices to the phone network.
The addition of a relay service (established in Australia in the mid-1990s after much government resistance) allows Deaf people to communicate with hearing people without TTYs (Goggin & Newell). Connecting TTYs to mobile phones has been a vexed issue, however, because the digital phone network in Australia does not allow compatibility. For this reason, and because of other features, Deaf people have become avid users of SMS (Harper). An especially favoured device in Europe has been the Nokia Communicator, with its hinged keyboard. The move from a ‘restricted’, ‘subcultural’ economy to a ‘general’ economy sees mobile texting become incorporated in the semiotic texture and prosaic practices of everyday life. Many users were already familiar with the conventions developed around electronic mail, with shorter, crisper messages sent and received — more conversation-like than other correspondence. Unlike phone calls, email is asynchronous. The sender can respond immediately, and the reply will be received within seconds. However, they can also choose to reply at their leisure. Similarly, for the adept user, SMS offers considerable advantages over voice communications, because it makes textual production mobile. Writing and reading can take place wherever a mobile phone can be turned on: in the street, on the train, in the club, in the lecture theatre, in bed. The body writes differently too. Writing with a pen takes a finger and thumb. Typing on a keyboard requires between two and ten fingers. The mobile phone uses the ‘fifth finger’ — the thumb. Always too early, and too late, to speculate on contemporary culture (Morris), it is worth analyzing the textuality of mobile text. Theorists of media, especially television, have insisted on understanding the specific textual modes of different cultural forms.
We are familiar with this imperative, and other methods of making visible and decentring structures of text, and the institutions which animate and frame them (whether author or producer; reader or audience; the cultural expectations encoded in genre; the inscriptions in technology). In formal terms, mobile text can be described as involving elision, great compression, and open-endedness. Its channels of communication physically constrain the composition of a very long single text message. Imagine sending James Joyce’s Finnegan’s Wake in one text message. How long would it take to key in this exemplar of the disintegration of the cultural form of the novel? How long would it take to read? How would one navigate the text? Imagine sending the Courier-Mail or Financial Review newspaper over a series of text messages. The concept of the ‘news’, with all its cultural baggage, is being reconfigured by mobile text — more along the lines of the older technology of the telegraph, perhaps: a few words suffice to signify what is important. Mobile textuality, then, involves a radical fragmentation and unpredictable seriality of text lexia (Barthes). Sometimes a mobile text looks singular: saying ‘yes’ or ‘no’, or sending your name and ID number to obtain your high school or university results. Yet, like a telephone conversation, or any text perhaps, its structure is always predicated upon, and haunted by, the other. Its imagined reader always has a mobile phone too, little time, no fixed address (except that hailed by the network’s radio transmitter), and a finger poised to respond. Mobile text has structure and channels. Yet, like all text, our reading and writing of it reworks those fixities and destabilizes our ‘clear’ communication. After all, mobile textuality has a set of new pre-conditions and fragilities.
It introduces new sorts of ‘noise’-to-‘signal’ problems to annoy those theorists cleaving to the Shannon and Weaver linear model of communication; signals often drop out; there is a network confirmation (and message displayed) that text messages have been sent, but no system guarantee that they have been received. Our friend or service provider might text us back, but how do we know that they got our text message?

Commodity

We are familiar now with the pleasures of mobile text: the smile of alerting a friend to our arrival, celebrating good news, jilting a lover, making a threat, firing a worker, flirting and picking-up. Text culture has a new vector of mobility, invented by its users, but now coveted and commodified by businesses who did not see it coming in the first place. Nimble in its keystrokes, rich in expressivity and cultural invention, but relatively rudimentary in its technical characteristics, mobile text culture has finally registered in the boardrooms of communications companies. Not only is SMS the preferred medium of mobile phone users to keep in touch with each other, SMS has insinuated itself into previously separate communication industry arenas. In 2002-2003 SMS became firmly established in television broadcasting. Finally, interactive television had arrived after many years of prototyping and being heralded. The keenly awaited back-channel for television arrives courtesy not of cable or satellite television, nor an extra fixed-phone line. It’s the mobile phone, stupid! Big Brother was not only a watershed in reality television, but also in convergent media. Less obvious perhaps than supplementary viewing, or biographies, or chat on Big Brother websites around the world was the use of SMS for voting. SMS is now routinely used by mainstream television channels for viewer feedback, contest entry, and program information.
As well as its widespread deployment in broadcasting, mobile text culture has been the language of prosaic, everyday transactions. Slipping into a café at Bronte Beach in Sydney, why not pay your parking meter via SMS? You’ll even receive a warning when your time is up. The mobile is becoming the ‘electronic purse’, with SMS providing its syntax and sentences. The belated ingenuity of those fascinated by the economics of mobile text has also coincided with a technological reworking of its possibilities, with new implications for its semiotic possibilities. Multimedia messaging (MMS) has now been deployed, on capable digital phones (an instance of what has been called 2.5 generation [G] digital phones) and third-generation networks. MMS allows images, video, and audio to be communicated. At one level, this sort of capability can be user-generated, as in the popularity of mobiles that take pictures and send these to other users. Television broadcasters are also interested in the capability to send video clips of favourite programs to viewers. Not content with the revenues raised from millions of standard-priced SMS, and now MMS transactions, commercial participants along the value chain are keenly awaiting the deployment of what is called ‘premium rate’ SMS and MMS services. These services will involve the delivery of desirable content via SMS and MMS, and be priced at a premium. Products and services are likely to include: one-to-one textchat; subscription services (content delivered on handset); multi-party text chat (such as chat rooms); adult entertainment services; multi-part messages (such as text communications plus downloads); download of video or ringtones. In August 2003, one text-chat service charged $4.40 for a pair of SMS.

Pwr

At the end of 2003, we have scarcely registered the textual practices and systems in mobile text, a culture that sprang up in the interstices of telecommunications.
It may be urgent that we do think about the stakes here, as SMS is being extended and commodified. There are obvious and serious policy issues in premium rate SMS and MMS services, and questions concerning the political economy in which these are embedded. Yet there are cultural questions too, with intricate ramifications. How do we understand the effects of mobile textuality, rewriting the telephone book for this new cultural form (Ronell)? What are the new genres emerging? And what are the implications for cultural practice and policy? Does it matter, for instance, that new MMS and 3rd generation mobile platforms are not being designed or offered with any-to-any capabilities in mind: allowing any user to upload and send multimedia communications to any other? True, as the example of SMS shows, the inventiveness of users is difficult to foresee and predict, and so new forms of mobile text may have all sorts of relationships with content and communication. However, there are worrying signs of these developing mobile circuits being programmed for narrow channels of retail purchase of cultural products rather than open-source, open-architecture, publicly usable nodes of connection.

Works Cited

Agar, Jon. Constant Touch: A Global History of the Mobile Phone. Cambridge: Icon, 2003.
Barthes, Roland. S/Z. Trans. Richard Miller. New York: Hill & Wang, 1974.
Brown, Barry, Green, Nicola, and Harper, Richard, eds. Wireless World: Social, Cultural, and Interactional Aspects of the Mobile Age. London: Springer Verlag, 2001.
Butcher, Melissa, and Thomas, Mandy, eds. Ingenious: Emerging Youth Cultures in Urban Australia. Melbourne: Pluto, 2003.
Galvin, Michael. ‘September 11 and the Logistics of Communication.’ Continuum: Journal of Media and Cultural Studies 17.3 (2003): 303-13.
Goggin, Gerard, and Newell, Christopher. Digital Disability: The Social Construction of Disability in New Media. Lanham, MD: Rowman & Littlefield, 2003.
Harper, Phil.
‘Networking the Deaf Nation.’ Australian Journal of Communication 30.3 (2003), in press.
International Telecommunications Union (ITU). ‘Mobile Cellular, Subscribers per 100 People.’ World Telecommunication Indicators <http://www.itu.int/ITU-D/ict/statistics/> accessed 13 October 2003.
Katz, James E., and Aakhus, Mark, eds. Perpetual Contact: Mobile Communication, Private Talk, Public Performance. Cambridge: Cambridge UP, 2002.
Morris, Meaghan. Too Soon, Too Late: History in Popular Culture. Bloomington and Indianapolis: U of Indiana P, 1998.
Plant, Sadie. On the Mobile: The Effects of Mobile Telephones on Social and Individual Life. <http://www.motorola.com/mot/documents/0,1028,296,00.pdf> accessed 5 October 2003.
Ronell, Avital. The Telephone Book: Technology—Schizophrenia—Electric Speech. Lincoln: U of Nebraska P, 1989.

Citation reference for this article

MLA Style
Goggin, Gerard. "‘mobile text’." M/C: A Journal of Media and Culture <http://www.media-culture.org.au/0401/03-goggin.php>.

APA Style
Goggin, G. (2004, Jan 12). ‘mobile text’. M/C: A Journal of Media and Culture, 7, <http://www.media-culture.org.au/0401/03-goggin.php>


40

Livingstone, Randall M. "Let’s Leave the Bias to the Mainstream Media: A Wikipedia Community Fighting for Information Neutrality." M/C Journal 13, no. 6 (November 23, 2010). http://dx.doi.org/10.5204/mcj.315.

Full text

Abstract:

Although I'm a rich white guy, I'm also a feminist anti-racism activist who fights for the rights of the poor and oppressed. (Carl Kenner)

Systemic bias is a scourge to the pillar of neutrality. (Cerejota)

Count me in. Let's leave the bias to the mainstream media. (Orcar967)

Because this is so important. (CuttingEdge)

These are a handful of comments posted by online editors who have banded together in a virtual coalition to combat Western bias on the world’s largest digital encyclopedia, Wikipedia. This collective action by Wikipedians both acknowledges the inherent inequalities of a user-controlled information project like Wikipedia and highlights the potential for progressive change within that same project. These community members are taking the responsibility of social change into their own hands (or, more aptly, their own keyboards).

In recent years much research has emerged on Wikipedia from varying fields, ranging from computer science, to business and information systems, to the social sciences. While critical at times of Wikipedia’s growth, governance, and influence, most of this work observes with optimism that barriers to improvement are not firmly structural, but rather they are socially constructed, leaving open the possibility of important and lasting change for the better. WikiProject: Countering Systemic Bias (WP:CSB) represents one such collective effort. Close to 350 editors have signed on to the project, which began in 2004 and itself emerged from a similar project named CROSSBOW, or the “Committee Regarding Overcoming Serious Systemic Bias on Wikipedia.” As a WikiProject, the term used for a loose group of editors who collaborate around a particular topic, these editors work within the Wikipedia site and collectively create a social network that is unified around one central aim—representing the un- and underrepresented—and yet they are bound by no particular unified set of interests.
The first stage of a multi-method study, this paper looks at a snapshot of WP:CSB’s activity from both content analysis and social network perspectives to discover “who” geographically this coalition of the unrepresented is inserting into the digital annals of Wikipedia.

Wikipedia and Wikipedians

Developed in 2001 by Internet entrepreneur Jimmy Wales and academic Larry Sanger, Wikipedia is an online collaborative encyclopedia hosting articles in nearly 250 languages (Cohen). The English-language Wikipedia contains over 3.2 million articles, each of which is created, edited, and updated solely by users (Wikipedia “Welcome”). At the time of this study, Alexa, a website tracking organisation, ranked Wikipedia as the 6th most accessed site on the Internet. Unlike the five sites ahead of it though—Google, Facebook, Yahoo, YouTube (owned by Google), and live.com (owned by Microsoft)—all of which are multibillion-dollar businesses that deal more with information aggregation than information production, Wikipedia is a non-profit that operates on less than $500,000 a year and staffs only a dozen paid employees (Lih). Wikipedia is financed and supported by the WikiMedia Foundation, a charitable umbrella organisation with an annual budget of $4.6 million, mainly funded by donations (Middleton).

Wikipedia editors and contributors have the option of creating a user profile and participating via a username, or they may participate anonymously, with only an IP address representing their actions. Despite the option for total anonymity, many Wikipedians have chosen to visibly engage in this online community (Ayers, Matthews, and Yates; Bruns; Lih), and researchers across disciplines are studying the motivations of these new online collectives (Kane, Majchrzak, Johnson, and Chenisern; Oreg and Nov).
The motivations of open source software contributors, such as UNIX programmers and programming groups, have been shown to be complex and tied to both extrinsic and intrinsic rewards, including online reputation, self-satisfaction and enjoyment, and obligation to a greater common good (Hertel, Niedner, and Herrmann; Osterloh and Rota). Investigation into why Wikipedians edit has indicated multiple motivations as well, with community engagement, task enjoyment, and information sharing among the most significant (Schroer and Hertel). Additionally, Wikipedians seem to be taking up the cause of generativity (a concern for the ongoing health and openness of the Internet’s infrastructures) that Jonathan Zittrain notably called for in The Future of the Internet and How to Stop It.

Governance and Control

Although the technical infrastructure of Wikipedia is built to support and perhaps encourage an equal distribution of power on the site, Wikipedia is not a land of “anything goes.” The popular press has covered recent efforts by the site to reduce vandalism through a layer of editorial review (Cohen), a tightening of control cited as a possible reason for the recent dip in the number of active editors (Edwards). A number of regulations are already in place that prevent the open editing of certain articles and pages, such as the site’s disclaimers and pages that have suffered large amounts of vandalism. Editing wars can also cause temporary restrictions to editing, and Ayers, Matthews, and Yates point out that these wars can happen anywhere, even to Burt Reynolds’s page.

Academic studies have begun to explore the governance and control that has developed in the Wikipedia community, generally highlighting how order is maintained not through particular actors, but through established procedures and norms.
Konieczny tested whether Wikipedia’s evolution can be defined by Michels’ Iron Law of Oligarchy, which predicts that the everyday operations of any organisation cannot be run by a mass of members, and ultimately control falls into the hands of the few. Through exploring a particular WikiProject on information validation, he concludes:

There are few indicators of an oligarchy having power on Wikipedia, and few trends of a change in this situation. The high level of empowerment of individual Wikipedia editors with regard to policy making, the ease of communication, and the high dedication to ideals of contributors succeed in making Wikipedia an atypical organization, quite resilient to the Iron Law. (189)

Butler, Joyce, and Pike support this assertion, though they emphasise that instead of oligarchy, control becomes encapsulated in a wide variety of structures, policies, and procedures that guide involvement with the site. A virtual “bureaucracy” emerges, but one that should not be viewed with the negative connotation often associated with the term. Other work considers control on Wikipedia through the framework of commons governance, where “peer production depends on individual action that is self-selected and decentralized rather than hierarchically assigned. Individuals make their own choices with regard to resources managed as a commons” (Viegas, Wattenberg and McKeon). The need for quality standards and quality control largely dictate this commons governance, though interviewing Wikipedians with various levels of responsibility revealed that policies and procedures are only as good as those who maintain them. Forte, Larco, and Bruckman argue “the Wikipedia community has remained healthy in large part due to the continued presence of ‘old-timers’ who carry a set of social norms and organizational ideals with them into every WikiProject, committee, and local process in which they take part” (71).
Thus governance on Wikipedia is a strong representation of a democratic ideal, where actors and policies are closely tied in their evolution.

Transparency, Content, and Bias

The issue of transparency has proved to be a double-edged sword for Wikipedia and Wikipedians. The goal of a collective body of knowledge created by all—the “expert” and the “amateur”—can only be upheld if equal access to page creation and development is allotted to everyone, including those who prefer anonymity. And yet this very option for anonymity, or even worse, false identities, has been a sore subject for some in the Wikipedia community as well as a source of concern for some scholars (Santana and Wood). The case of a 24-year old college dropout who represented himself as a multiple Ph.D.-holding theology scholar and edited over 16,000 articles brought these issues into the public spotlight in 2007 (Doran; Elsworth). Wikipedia itself has set up standards for content that include expectations of a neutral point of view, verifiability of information, and the publishing of no original research, but Santana and Wood argue that self-policing of these policies is not adequate:

The principle of managerial discretion requires that every actor act from a sense of duty to exercise moral autonomy and choice in responsible ways. When Wikipedia’s editors and administrators remain anonymous, this criterion is simply not met. It is assumed that everyone is behaving responsibly within the Wikipedia system, but there are no monitoring or control mechanisms to make sure that this is so, and there is ample evidence that it is not so.
(141)

At the theoretical level, some downplay these concerns of transparency and autonomy as logistical issues in lieu of the potential for information systems to support rational discourse and emancipatory forms of communication (Hansen, Berente, and Lyytinen), but others worry that the questionable “realities” created on Wikipedia will become truths once circulated to all areas of the Web (Langlois and Elmer). With the number of articles on the English-language version of Wikipedia reaching well into the millions, the task of mapping and assessing content has become a tremendous endeavour, one mostly taken on by information systems experts. Kittur, Chi, and Suh have used Wikipedia’s existing hierarchical categorisation structure to map change in the site’s content over the past few years. Their work revealed that in early 2008 “Culture and the arts” was the most dominant category of content on Wikipedia, representing nearly 30% of total content. People (15%) and geographical locations (14%) represent the next largest categories, while the natural and physical sciences showed the greatest increase in volume between 2006 and 2008 (+213%, with “Culture and the arts” close behind at +210%). This data may indicate that contributing to Wikipedia, and thus spreading knowledge, is growing amongst the academic community while maintaining its importance to the greater popular culture-minded community. Further work by Kittur and Kraut has explored the collaborative process of content creation, finding that too many editors on a particular page can reduce the quality of content, even when a project is well coordinated.

Bias in Wikipedia content is a generally acknowledged and somewhat conflicted subject (Giles; Johnson; McHenry). The Wikipedia community has created numerous articles and pages within the site to define and discuss the problem.
Citing a survey conducted by the University of Würzburg, Germany, the “Wikipedia:Systemic bias” page describes the average Wikipedian as:

- Male
- Technically inclined
- Formally educated
- An English speaker
- White
- Aged 15-49
- From a majority Christian country
- From a developed nation
- From the Northern Hemisphere
- Likely a white-collar worker or student

Bias in content is thought to be perpetuated by this demographic of contributor, and the “founder effect,” a concept from genetics, linking the original contributors to this same demographic has been used to explain the origins of certain biases. Wikipedia’s “About” page discusses the issue as well, in the context of the open platform’s strengths and weaknesses:

in practice editing will be performed by a certain demographic (younger rather than older, male rather than female, rich enough to afford a computer rather than poor, etc.) and may, therefore, show some bias. Some topics may not be covered well, while others may be covered in great depth. No educated arguments against this inherent bias have been advanced.

Royal and Kapila’s study of Wikipedia content tested some of these assertions, finding identifiable bias in both their purposive and random sampling. They conclude that bias favoring larger countries is positively correlated with the size of the country’s Internet population, and corporations with larger revenues work in much the same way, garnering more coverage on the site. The researchers remind us that Wikipedia is “more a socially produced document than a value-free information source” (Royal & Kapila).

WikiProject: Countering Systemic Bias

As a coalition of current Wikipedia editors, the WikiProject: Countering Systemic Bias (WP:CSB) attempts to counter trends in content production and points of view deemed harmful to the democratic ideals of a value-free, open online encyclopedia.
WP:CSB’s mission is not one of policing the site, but rather deepening it:

Generally, this project concentrates upon remedying omissions (entire topics, or particular sub-topics in extant articles) rather than on either (1) protesting inappropriate inclusions, or (2) trying to remedy issues of how material is presented. Thus, the first question is "What haven't we covered yet?", rather than "how should we change the existing coverage?" (Wikipedia, “Countering”)

The project lays out a number of content areas lacking adequate representation, geographically highlighting the dearth in coverage of Africa, Latin America, Asia, and parts of Eastern Europe. WP:CSB also includes a “members” page that editors can sign to show their support, along with space to voice their opinions on the problem of bias on Wikipedia (the quotations at the beginning of this paper are taken from this “members” page). At the time of this study, 329 editors had self-selected and self-identified as members of WP:CSB, and this group constitutes the population sample for the current study.

To explore the extent to which WP:CSB addressed these self-identified areas for improvement, each editor’s last 50 edits were coded for their primary geographical country of interest, as well as the conceptual category of the page itself (“P” for person/people, “L” for location, “I” for idea/concept, “T” for object/thing, or “NA” for indeterminate). For example, edits to the Wikipedia page for a single person like Tony Abbott (Australian federal opposition leader) were coded “Australia, P”, while an edit for a group of people like the Manchester United football team would be coded “England, P”. Coding was based on information obtained from the header paragraphs of each article’s Wikipedia page. After coding was completed, corresponding information on each country’s associated continent was added to the dataset, based on the United Nations Statistics Division listing. A total of 15,616 edits were coded for the study.
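The coding and tallying procedure described above can be sketched as a small routine. The editor names, the five sample rows, and the partial continent lookup below are invented for illustration; they are not data from the study.

```python
from collections import Counter

# Hypothetical coded edits in the study's scheme: (editor, country, category),
# where category is "P" person/people, "L" location, "I" idea/concept,
# "T" object/thing, or "NA" indeterminate. These rows are invented examples.
coded_edits = [
    ("EditorA", "Australia", "P"),       # e.g. an edit to the Tony Abbott page
    ("EditorA", "England", "P"),         # e.g. an edit to the Manchester United page
    ("EditorB", "India", "L"),
    ("EditorC", "United States", "T"),
    ("EditorC", "Gabon", "P"),
]

# Partial country-to-continent lookup, standing in for the
# United Nations Statistics Division listing used by the study.
continent = {
    "Australia": "Australia", "England": "Europe", "India": "Asia",
    "United States": "North America", "Gabon": "Africa",
}

by_category = Counter(cat for _, _, cat in coded_edits)
people_by_continent = Counter(
    continent[country] for _, country, cat in coded_edits if cat == "P"
)

print(by_category)           # category distribution across all coded edits
print(people_by_continent)   # continental spread of person/people edits
```

Run over all 15,616 coded edits, the same two tallies would reproduce the category and continent distributions the study reports.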
Nearly 32% (n = 4,962) of these edits were on articles for persons or people (see Table A for complete coding results). From within this sub-sample of edits, a majority of the people (68.67%) represented are associated with North America and Europe (Figure A). If we break these statistics down further, nearly half of WP:CSB’s edits concerning people were associated with the United States (36.11%) and England (10.16%), with India (3.65%) and Australia (3.35%) following at a distance. These figures make sense for the English-language Wikipedia; over 95% of the population in the three Westernised countries speak English, and while India is still often regarded as a developing nation, its colonial British roots and the emergence of a market economy with large, technology-driven cities are logical explanations for its representation here (and some estimates make India the largest English-speaking nation by population on the globe today).

Table A: Coding Results

Total edits: 15,616
(I) Ideas: 2,881 (18.45%)
(L) Location: 2,240 (14.34%)
(T) Thing: 5,200 (33.30%)
(P) People: 4,962 (31.78%)
NA: 333 (2.13%)

People by continent:
Africa: 315 (6.35%)
Asia: 827 (16.67%)
Australia: 175 (3.53%)
Europe: 1,411 (28.44%)
North America: 1,996 (40.23%)
South America: 128 (2.58%)
NA: 110 (2.22%)

The areas of the globe of main concern to WP:CSB proved to be much less represented by the coalition itself. Asia, far and away the most populous continent with more than 60% of the globe’s people (GeoHive), was represented in only 16.67% of edits. Africa (6.35%) and South America (2.58%) were equally underrepresented compared to both their real-world populations (15% and 9% of the globe’s population respectively) and the aforementioned dominance of the advanced Westernised areas.
However, while these percentages may seem low, in aggregate they do meet the quota set on the WP:CSB Project Page calling for one out of every twenty edits to be “a subject that is systematically biased against the pages of your natural interests.” By this standard, the coalition is indeed making headway in adding content that strategically counterbalances the natural biases of Wikipedia’s average editor.

Figure A

Social network analysis allows us to visualise multifaceted data in order to identify relationships between actors and content (Vega-Redondo; Watts). Similar to Davis’s well-known sociological study of Southern American socialites in the 1930s (Scott), our Wikipedia coalition can be conceptualised as individual actors united by common interests, and a network of relations can be constructed with software such as UCINET. A mapping algorithm that considers both the relationship between all sets of actors and each actor to the overall collective structure produces an image of our network. This initial network is bimodal, as both our Wikipedia editors and their edits (again, coded for country of interest) are displayed as nodes (Figure B). Edge-lines between nodes represent a relationship, and here that relationship is the act of editing a Wikipedia article. We see from our network that the “U.S.” and “England” hold central positions in the network, with a mass of editors crowding around them. A perimeter of nations is then held in place by their ties to editors through the U.S. and England, with a second layer of editors and poorly represented nations (Gabon, Laos, Uzbekistan, etc.) around the boundaries of the network.

Figure B

We are reminded from this visualisation both of the centrality of the two Western powers even among WP:CSB editors, and of the peripheral nature of most other nations in the world. But we also learn which editors in the project are contributing most to underrepresented areas, and which are less “tied” to the Western core.
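As a rough sketch of what the bimodal mapping captures, the two-mode network can be represented as an editor-to-country edge list, with a country's degree (the number of distinct editors tied to it) standing in for the centrality that UCINET visualises. The editor names below appear in the study, but the ties themselves are invented for illustration.

```python
from collections import defaultdict

# Invented editor -> country-of-edit ties for a toy bimodal network;
# the editor names come from the study, the specific ties do not.
edges = [
    ("Wizzy", "U.S."), ("Wizzy", "South Africa"),
    ("Warofdreams", "England"), ("Warofdreams", "Ghana"),
    ("Gallador", "Laos"), ("Gerrit", "Gabon"),
    ("EditorX", "U.S."), ("EditorY", "U.S."), ("EditorY", "England"),
]

# Adjacency for the country mode: which editors touch each nation.
country_ties = defaultdict(set)
for editor, country in edges:
    country_ties[country].add(editor)

# Rank nations by degree -- the count of distinct editors tied to them.
# High-degree nations sit at the core of the map; degree-1 nations form
# the periphery, reachable only through a single editor.
ranking = sorted(country_ties, key=lambda c: len(country_ties[c]), reverse=True)
print(ranking)
```

In this toy data the U.S. tops the ranking, while single-editor nations such as Gabon and Laos fall to the periphery, mirroring the core-periphery shape described above.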
Here we see “Wizzy” and “Warofdreams” among the second layer of editors who act as a bridge between the core and the periphery; these are editors with interests in both the Western and marginalised nations. Located along the outer edge, “Gallador” and “Gerrit” have no direct ties to the U.S. or England, concentrating all of their edits on less represented areas of the globe. Identifying editors at these key positions in the network will help with future research, informing interview questions that will investigate their interests further, but more significantly, probing motives for participation and action within the coalition. Additionally, we can break the network down further to discover editors who appear to have similar interests in underrepresented areas. Figure C strips down the network to only editors and edits dealing with Africa and South America, the least represented continents. From this we can easily find three types of editors again: those who have singular interests in particular nations (the outermost layer of editors), those who have interests in a particular region (the second layer moving inward), and those who have interests in both of these underrepresented regions (the center layer in the figure). This last group of editors may prove to be the most crucial to understand, as they are carrying the full load of WP:CSB’s mission.

Figure C

The End of Geography, or the Reclamation?

In The Internet Galaxy, Manuel Castells writes that “the Internet Age has been hailed as the end of geography,” a bold suggestion, but one that has gained traction over the last 15 years as the excitement for the possibilities offered by information communication technologies has often overshadowed structural barriers to participation like the Digital Divide (207).
Castells goes on to amend the “end of geography” thesis by showing how global information flows and regional Internet access rates, while creating a new “map” of the world in many ways, are still closely tied to power structures in the analog world. The Internet Age “redefines distance but does not cancel geography” (207). The work of WikiProject: Countering Systemic Bias emphasises the importance of place and representation in the information environment that continues to be constructed in the online world. This study looked at only a small portion of this coalition’s efforts (~16,000 edits)—a snapshot of their labor frozen in time—which itself is only a minute portion of the information being dispatched through Wikipedia on a daily basis (~125,000 edits). Further analysis of WP:CSB’s work over time, as well as qualitative research into the identities, interests and motivations of this collective, is needed to understand more fully how information bias is understood and challenged in the Internet galaxy. The data here indicates this is a fight worth fighting for at least a growing few.

References

Alexa. “Top Sites.” Alexa.com, n.d. 10 Mar. 2010 ‹http://www.alexa.com/topsites>.
Ayers, Phoebe, Charles Matthews, and Ben Yates. How Wikipedia Works: And How You Can Be a Part of It. San Francisco, CA: No Starch, 2008.
Bruns, Axel. Blogs, Wikipedia, Second Life, and Beyond: From Production to Produsage. New York: Peter Lang, 2008.
Butler, Brian, Elisabeth Joyce, and Jacqueline Pike. Don’t Look Now, But We’ve Created a Bureaucracy: The Nature and Roles of Policies and Rules in Wikipedia. Paper presented at 2008 CHI Annual Conference, Florence.
Castells, Manuel. The Internet Galaxy: Reflections on the Internet, Business, and Society. Oxford: Oxford UP, 2001.
Cohen, Noam. “Wikipedia.” New York Times, n.d. 12 Mar. 2010 ‹http://www.nytimes.com/info/wikipedia/>.
Doran, James. “Wikipedia Chief Promises Change after ‘Expert’ Exposed as Fraud.” The Times, 6 Mar. 2007 ‹http://technology.timesonline.co.uk/tol/news/tech_and_web/article1480012.ece>.
Edwards, Lin. “Report Claims Wikipedia Losing Editors in Droves.” Physorg.com, 30 Nov. 2009. 12 Feb. 2010 ‹http://www.physorg.com/news178787309.html>.
Elsworth, Catherine. “Fake Wikipedia Prof Altered 20,000 Entries.” London Telegraph, 6 Mar. 2007 ‹http://www.telegraph.co.uk/news/1544737/Fake-Wikipedia-prof-altered-20000-entries.html>.
Forte, Andrea, Vanessa Larco, and Amy Bruckman. “Decentralization in Wikipedia Governance.” Journal of Management Information Systems 26 (2009): 49-72.
Giles, Jim. “Internet Encyclopedias Go Head to Head.” Nature 438 (2005): 900-901.
Hansen, Sean, Nicholas Berente, and Kalle Lyytinen. “Wikipedia, Critical Social Theory, and the Possibility of Rational Discourse.” The Information Society 25 (2009): 38-59.
Hertel, Guido, Sven Niedner, and Stefanie Herrmann. “Motivation of Software Developers in Open Source Projects: An Internet-Based Survey of Contributors to the Linux Kernel.” Research Policy 32 (2003): 1159-1177.
Johnson, Bobbie. “Rightwing Website Challenges ‘Liberal Bias’ of Wikipedia.” The Guardian, 1 Mar. 2007. 8 Mar. 2010 ‹http://www.guardian.co.uk/technology/2007/mar/01/wikipedia.news>.
Kane, Gerald C., Ann Majchrzak, Jeremiah Johnson, and Lily Chenisern. A Longitudinal Model of Perspective Making and Perspective Taking within Fluid Online Collectives. Paper presented at the 2009 International Conference on Information Systems, Phoenix, AZ, 2009.
Kittur, Aniket, Ed H. Chi, and Bongwon Suh. What’s in Wikipedia? Mapping Topics and Conflict Using Socially Annotated Category Structure. Paper presented at the 2009 CHI Annual Conference, Boston, MA.
———, and Robert E. Kraut. Harnessing the Wisdom of Crowds in Wikipedia: Quality through Collaboration. Paper presented at the 2008 Association for Computing Machinery’s Computer Supported Cooperative Work Annual Conference, San Diego, CA.
Konieczny, Piotr. “Governance, Organization, and Democracy on the Internet: The Iron Law and the Evolution of Wikipedia.” Sociological Forum 24 (2009): 162-191.
———. “Wikipedia: Community or Social Movement?” Interface: A Journal for and about Social Movements 1 (2009): 212-232.
Langlois, Ganaele, and Greg Elmer. “Wikipedia Leeches? The Promotion of Traffic through a Collaborative Web Format.” New Media & Society 11 (2009): 773-794.
Lih, Andrew. The Wikipedia Revolution. New York, NY: Hyperion, 2009.
McHenry, Robert. “The Real Bias in Wikipedia: A Response to David Shariatmadari.” OpenDemocracy.com, 2006. 8 Mar. 2010 ‹http://www.opendemocracy.net/media-edemocracy/wikipedia_bias_3621.jsp>.
Middleton, Chris. “The World of Wikinomics.” Computer Weekly, 20 Jan. 2009: 22-26.
Oreg, Shaul, and Oded Nov. “Exploring Motivations for Contributing to Open Source Initiatives: The Roles of Contribution, Context and Personal Values.” Computers in Human Behavior 24 (2008): 2055-2073.
Osterloh, Margit, and Sandra Rota. “Trust and Community in Open Source Software Production.” Analyse & Kritik 26 (2004): 279-301.
Royal, Cindy, and Deepina Kapila. “What’s on Wikipedia, and What’s Not…?: Assessing Completeness of Information.” Social Science Computer Review 27 (2008): 138-148.
Santana, Adele, and Donna J. Wood. “Transparency and Social Responsibility Issues for Wikipedia.” Ethics of Information Technology 11 (2009): 133-144.
Schroer, Joachim, and Guido Hertel. “Voluntary Engagement in an Open Web-Based Encyclopedia: Wikipedians and Why They Do It.” Media Psychology 12 (2009): 96-120.
Scott, John. Social Network Analysis. London: Sage, 1991.
Vega-Redondo, Fernando. Complex Social Networks. Cambridge: Cambridge UP, 2007.
Viegas, Fernanda B., Martin Wattenberg, and Matthew M. McKeon. “The Hidden Order of Wikipedia.” Online Communities and Social Computing (2007): 445-454.
Watts, Duncan. Six Degrees: The Science of a Connected Age. New York, NY: W. W. Norton & Company, 2003.
Wikipedia. “About.” n.d. 8 Mar. 2010 ‹http://en.wikipedia.org/wiki/Wikipedia:About>.
———. “Welcome to Wikipedia.” n.d. 8 Mar. 2010 ‹http://en.wikipedia.org/wiki/Main_Page>.
———. “Wikiproject:Countering Systemic Bias.” n.d. 12 Feb. 2010 ‹http://en.wikipedia.org/wiki/Wikipedia:WikiProject_Countering_systemic_bias#Members>.
Zittrain, Jonathan. The Future of the Internet and How to Stop It. New Haven, CT: Yale UP, 2008.

APA, Harvard, Vancouver, ISO, and other styles

41

Campanioni, Chris. "How Bizarre: The Glitch of the Nineties as a Fantasy of New Authorship." M/C Journal 21, no. 5 (December 6, 2018). http://dx.doi.org/10.5204/mcj.1463.

Full text

Abstract:

As the ball dropped on 1999, is it any wonder that No Doubt played “It’s the End of the World as We Know It” by R.E.M. live on MTV? Any discussion of the Nineties—and its pinnacle moment, Y2K—requires a discussion of both the cover and the glitch, two performative and technological enactments that fomented the collapse between author-reader and user-machine that has, twenty years later, become normalised in today’s Post Internet culture. By staging failure and inviting the audience to participate, the glitch and the cover call into question the original and the origin story. This breakdown of normative borders has prompted the convergence of previously demarcated media, genres, and cultures, a constellation from which to recognise a stochastic hybrid form.

The Cover as a Revelation of Collaborative Murmur

Before Sean Parker collaborated with Shawn Fanning to launch Napster on 1 June 1999, networked file distribution existed as cumbersome text-based programs like Internet Relay Chat and Usenet, servers which resembled bulletin boards comprising multiple categories of digitally ripped files. Napster’s simple interface, its advanced search filters, and its focus on music and audio files fostered a peer-to-peer network that became the fastest growing website in history, registering 80 million users in less than two years. In harnessing the transgressive power of the Internet to force a new mode of content sharing, Napster compelled traditional providers to rethink what constitutes “content” at a moment which prefigures our current phenomena of “produsage” (Bruns) and the vast popularity of user-generated content.
At stake is not just the democratisation of art but troubling the very idea of intellectual property, which is to say, the very concept of ownership. Long before the Internet was re-routed from military servers and then mainstreamed, Michel Foucault understood the efficacy of anonymous interactions on the level of literature, imagining a culture where discourse would circulate without any need for an author. But what he was asking in 1969 is something we can better answer today, because it seems less germane to call into question the need for an author in a culture in which everyone is writing, producing, and reproducing text, and more effective to think about re-evaluating the notion of a single author, or what it means to write by yourself. One would have to testify to the particular medium we have at our disposal, the Internet’s ultimate permissibility, its provocations for collaboration and co-creation. One would have to surrender the idea that authors own anything besides our will to keep producing, and our desire for change; and to modulate means to resist without negating, to alter without omitting, to enable something new to come forward; the unfolding of the text into the anonymity of a murmur. We should remind ourselves that “to author” all the way down to its Latin roots signifies advising, witnessing, and transferring. We should be reminded that to author something means to forget the act of saying “I,” to forget it or to make it recede in the background in service of the other or others, on behalf of a community. The de-centralisation of Web development and programming initiated by Napster informs a poetics of relation, an always-open structure in which, as Édouard Glissant said, “the creator of a text is effaced, or rather, is done away with, to be revealed in the texture of his creation” (25). When a solid melts, it reveals something always underneath, something at the bottom, something inside—something new and something that was always already there.
A cover, too, is both a revival and a reworking, an update and an interpretation, a retrospective tribute and a re-version that looks toward the future. In performing the new, the original as singular is called into question, replaced by an increasingly fetishised copy made up of and made by multiples.

Authorial Effacement and the Exigency of the Error

Y2K, otherwise known as the Millennium Bug, was a coding problem, an abbreviation made to save memory space which would disrupt computers during the transition from 1999 to 2000, when it was feared that the new year would become literally unrecognisable. After an estimated $300 billion in upgraded hardware and software was spent to make computers Y2K-compliant, something more extraordinary than global network collapse occurred as midnight struck: nothing. But what if the machine admits the possibility of accident? Implicit in the admission of any accident is the disclosure of a new condition—something to be heard, to happen, from the Latin ad-cadere, which means to fall. In this drop into non-repetition, the glitch actualises an idea about authorship that necessitates multi-user collaboration; the curtain falls only to reveal the hidden face of technology, which becomes, ultimately, instructions for its re-programming. And even as it deviates, the new form is liable to become mainstreamed into a new fashion.
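The two-digit-year shortcut behind the Millennium Bug can be sketched in a few lines; this is an illustrative reconstruction of the abbreviation that saved memory, not code from any actual affected system:

```python
# Y2K in miniature: storing only the last two digits of the year saves
# space, but the century must then be guessed back on read.
def store_year(year: int) -> str:
    return f"{year % 100:02d}"      # 1999 is stored as "99"

def load_year(two_digits: str) -> int:
    return 1900 + int(two_digits)   # the fatal assumption: always 19xx

assert load_year(store_year(1999)) == 1999   # round-trips fine...
print(load_year(store_year(2000)))           # ...but 2000 comes back as 1900
```

The "glitch" is not in either function alone but in the rollover: once the stored digits wrap from "99" to "00", every date comparison silently runs a century backwards.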
“Glitch’s inherently critical moment(um)” (Menkman 8) indicates this potential for technological self-insurgence, while suggesting the broader cultural collapse of generic markers and hierarchies, and its ensuing flow into authorial fluidity. This feeling of shock, this move “towards the ruins of destructed meaning” (Menkman 29) inherent in any encounter with the glitch, forecast not the immediate horror of Y2K, but the delayed disasters of 9/11, Hurricane Katrina, the Deepwater Horizon oil spill, the Indian Ocean tsunami, the Sichuan Province earthquake, the global financial crisis, and two international wars that would all follow within the next nine years. If, as Menkman asserts, the glitch, in representing a loss of self-control, “captures the machine revealing itself” (30), what also surfaces is the tipping point that edges us toward a new becoming—not only the inevitability of surrender between machine and user, but their reversibility. Just as crowds stood, transfixed before midnight of the new millennium in anticipation of the error, or its exigency, it’s always the glitch I wait for; it’s always the glitch I aim to re-create, as if on command. The accidental revelation, or the machine breaking through to show us its insides. Like the P2P network that Napster introduced to culture, every glitch produces feedback, a category of noise (Shannon) influencing the machine’s future behaviour whereby potential users might return the transmission.

Re-Orienting the Bizarre in Fantasy and Fiction

It is in the fantasy of dreams, and their residual leakage into everyday life, evidenced so often in David Lynch’s Twin Peaks, where we can locate a similar authorial agency.
The cult Nineties psycho-noir, and its discontinuous return twenty-six years later, provoke us into reconsidering the science of sleep as the art of fiction, assembling an alternative, interactive discourse from found material. The turning in and turning into in dreams is often described as an encounter with the “bizarre,” a word which indicates our lack of understanding about the peculiar processes that normally happen inside our heads. Dreams are inherently and primarily bizarre, J. Allan Hobson argues, because during REM sleep, our noradrenergic and serotonergic systems do not modulate the activated brain, as they do in waking. “The cerebral cortex and hippocampus cannot function in their usual oriented and linear logical way,” Hobson writes, “but instead create odd and remote associations” (71). But is it, in fact, that our dreams are “bizarre” or is it that the model itself is faulty—a precept premised on the normative, its dependency upon generalisation and reducibility—what is bizarre if not the ordinary modulations that occur in everyday life? Recall Foucault’s interest not in what a dream means but what a dream does. How it rematerialises in the waking world and its basis in and effect on imagination. Recall recollection itself, or Erin J. Wamsley’s “Dreaming and Offline Memory Consolidation.” “A ‘function’ for dreaming,” Wamsley writes, “hinges on the difficult question of whether conscious experience in general serves any function” (433). And to think about the dream as a specific mode of experience related to a specific theory of knowledge is to think about a specific form of revelation. It is this revelation, this becoming or coming-to-be, that makes the connection to crowd-sourced content production explicit—dreams serve as an audition or dress rehearsal in which new learning experiences with others are incorporated into the unconscious so that they might be used for production in the waking world. Bert O.
States elaborates, linking the function of the dream with the function of the fiction writer “who makes models of the world that carry the imprint and structure of our various concerns. And it does this by using real people, or ‘scraps’ of other people, as the instruments of hypothetical facts” (28). Four out of ten characters in a dream are strangers, according to Calvin Hall, who is himself a stranger, someone I’ve never met in waking life or in a dream. But now that I’ve read him, now that I’ve written him into this work, he seems closer to me. Twin Peaks’ serial lesson for viewers is this—even the people who seem strangers to us can interact with and intervene in our processes of production.

These are the moments in which a beginning takes place. And even if nothing directly follows, this transfer constitutes the hypothesised moment of production, an always-already perhaps, the what-if stimulus of charged possibility; the soil plot, or plot line, for freedom. Twin Peaks is a town in which the bizarre penetrates the everyday so often that eventually, the bizarre is no longer bizarre, but just another encounter with the ordinary. Dream sequences are common, but even more common—and more significant—are the moments in which what might otherwise be a dream vision ruptures into real life; these moments propel the narrative.

Exhibit A: A man who hasn’t gone outside in a while begins to crumble, falling to the earth when forced to chase after a young girl, who’s just stolen the secret journal of another young girl, which he, in turn, had stolen.

B: A horse appears in the middle of the living room after a routine vacuum cleaning and a subtle barely-there transition, a fade-out into a fade-in, what people call a dissolve. No one notices, or thinks to point out its presence. Or maybe they’re distracted. Or maybe they’ve already forgotten. Dissolve.

(I keep hitting “Save As.” As if renaming something can also transform it.)

C: All the guests at the Great Northern Hotel begin to dance the tango on cue—a musical, without any music.

D: After an accident, a middle-aged woman with an eye patch—she was wearing the eye patch before the accident—believes she’s seventeen again. She enrolls in Twin Peaks High School and joins the cheerleading team.

E: A woman pretending to be a Japanese businessman ambles into the town bar to meet her estranged husband, who fails to recognise his cross-dressing, race-swapping wife.

F: A girl with blond hair is murdered, only to come back as another girl, with the same face and a different name. And brown hair. They’re cousins.

G: After taking over her dead best friend’s Meals on Wheels route, Donna Hayward walks in to meet a boy wearing a tuxedo, sitting on the couch with his fingers clasped: a magician-in-training. “Sometimes things can happen just like this,” he says with a snap while the camera cuts to his grandmother, bed-ridden, and the appearance of a plate of creamed corn that vanishes as soon as she announces its name.

H: A woman named Margaret talks to and through a log. The log, cradled in her arms wherever she goes, becomes a key witness.

I: After a seven-minute diegetic dream sequence, which includes a one-armed man, a dwarf, a waltz, a dead girl, a dialogue played backward, and a significantly aged representation of the dreamer, Agent Cooper wakes up and drastically shifts his investigation of a mysterious small-town murder. The dream gives him agency; it turns him from a detective staring at a dead-end to one with a map of clues. The next day, it makes him a storyteller; all the others, sitting tableside in the middle of the woods, become a captive audience. They become readers. They read into his dream to create their own scenarios. Exhibit I.
The cycle of imagination spins on. Images re-direct and obfuscate meaning, a process of over-determination which Foucault says results in “a multiplication of meanings which override and contradict each other” (DAE 34). In the absence of image, the process of imagination prevails. In the absence of story, real drama in our conscious life, we form complex narratives in our sleep—our imaginative unconscious. Sometimes they leak out, become stories in our waking life, if we think to compose them.

“A bargain has been struck,” says Harold, an under-5 bit player, later, in an episode called “Laura’s Secret Diary.” So that she might have the chance to read Laura Palmer’s diary, Donna Hayward agrees to talk about her own life, giving Harold the opportunity to write it down in his notebook: his “living novel,” the new chapter which reads, after uncapping his pen and smiling, “Donna Hayward.” He flips to the front page and sets a book weight to keep the page in place. He looks over at Donna sheepishly. “Begin.”

Donna begins talking about where she was born, the particulars of her father—the lone town doctor—before she interrupts the script and asks her interviewer about his origin story. Not used to people asking him the questions, Harold’s mouth drops and he stops writing. He puts his free hand to his chest and clears his throat. (The ambient, wind-chime soundtrack intensifies.) “I grew up in Boston,” he finally volunteers. “Well, actually, I grew up in books.” He turns his head from Donna to the notebook, writing feverishly, as if he’s begun to write his own responses as the camera cuts back to his subject, Donna, crossing her legs with both hands cupped at her exposed knee, leaning in to tell him: “There’s things you can’t get in books.”

“There’s things you can’t get anywhere,” he returns, pen still in his hand. “When we dream, they can be found in other people.”

What is a call to composition if not a call for a response?
It is always the audience which makes a work of art, re-framed in our own image, the same way we re-orient ourselves in a dream to negotiate its “inconsistencies.” Bizarreness is merely a consequence of linguistic limitations, the overwhelming sensory dream experience which can only be re-framed via a visual representation. And so the relationship between the experience of reading and dreaming is made explicit when we consider the associations internalised in the reader/audience when ingesting a passage of words on a page or on the stage, objects that become mental images and concept pictures, a lens of perception that we may liken to another art form: the film, with its jump-cuts and dissolves, so much like the defamiliarising and dislocating experience of dreaming, especially for the dreamer who wakes. What else to do in that moment but write about it?Evidence of the bizarre in dreams is only the evidence of the capacity of our human consciousness at work in the unconscious; the moment in which imagination and memory come together to create another reality, a spectrum of reality that doesn’t posit a binary between waking and sleeping, a spectrum of reality that revels in the moments where the two coalesce, merge, cross-pollinate—and what action glides forward in its wake? Sustained un-hesitation and the wish to stay inside one’s self. To be conscious of the world outside the dream means the end of one. To see one’s face in the act of dreaming would require the same act of obliteration. Recognition of the other, and of the self, prevents the process from being fulfilled. Creative production and dreaming, like voyeurism, depend on this same lack of recognition, or the recognition of yourself as other. What else is a dream if not a moment of becoming, of substituting or sublimating yourself for someone else?We are asked to relate a recent dream or we volunteer an account, to a friend or lover. 
We use the word “seem” in nearly every description, when we add it up or how we fail to. Everything seems to be a certain way. It’s not a place but a feeling. James, another character on Twin Peaks, says the same thing, after someone asks him, “Where do you want to go?” but before he hops on his motorcycle and rides off into the unknowable future outside the frame. Everything seems like something else, based on our own associations, our own knowledge of people and things. Offline memory consolidation. Seeming and semblance. An uncertainty of appearing—both happening and seeing. How we mediate—and re-materialise—the dream through text is our attempt to re-capture imagination, to leave off the image and better become it. If, as Foucault says, the dream is always a dream of death, its purpose is a call to creation.Outside of dreams, something bizarre occurs. We call it novelty or news. We might even bestow it with fame. A man gets on the wrong plane and ends up halfway across the world. A movie is made into the moment of his misfortune. Years later, in real life and in movie time, an Iranian refugee can’t even get on the plane; he is turned away by UK immigration officials at Charles de Gaulle, so he spends the next sixteen years living in the airport lounge; when he departs in real life, the movie (The Terminal, 2004) arrives in theaters. Did it take sixteen years to film the terminal exile? How bizarre, how bizarre. OMC’s eponymous refrain of the 1996 one-hit wonder, which is another way of saying, an anomaly.When all things are counted and countable in today’s algorithmic-rich culture, deviance becomes less of a statistical glitch and more of a testament to human peculiarity; the repressed idiosyncrasies of man before machine but especially the fallible tendencies of mankind within machines—the non-repetition of chance that the Nineties emblematised in the form of its final act. 
The point is to imagine what comes next; to remember waiting together for the end of the world. There is no need to even open your eyes to see it. It is just a feeling.

References

Bruns, Axel. “Towards Produsage: Futures for User-Led Content Production.” Cultural Attitudes towards Technology and Communication 2006: Proceedings of the Fifth International Conference, eds. Fay Sudweeks, Herbert Hrachovec, and Charles Ess. Murdoch: School of Information Technology, 2006. 275-84. <https://eprints.qut.edu.au/4863/1/4863_1.pdf>.
Foucault, Michel. “Dream, Imagination and Existence.” Dream and Existence. Ed. Keith Hoeller. Pittsburgh: Review of Existential Psychology & Psychiatry, 1986. 31-78.
———. “What Is an Author?” The Foucault Reader: An Introduction to Foucault’s Thought. Ed. Paul Rabinow. New York: Penguin, 1991.
Glissant, Édouard. Poetics of Relation. Trans. Betsy Wing. Ann Arbor: U of Michigan P, 1997.
Hall, Calvin S. The Meaning of Dreams. New York: McGraw Hill, 1966.
Hobson, J. Allan. The Dream Drugstore: Chemically Altered States of Consciousness. Cambridge: MIT Press, 2001.
Menkman, Rosa. The Glitch Moment(um). Amsterdam: Network Notebooks, 2011.
Shannon, Claude Elwood. “A Mathematical Theory of Communication.” The Bell System Technical Journal 27 (1948): 379-423.
States, Bert O. “Bizarreness in Dreams and Other Fictions.” The Dream and the Text: Essays on Literature and Language. Ed. Carol Schreier Rupprecht. Albany: SUNY P, 1993.
Twin Peaks. Dir. David Lynch. ABC and Showtime. 1990-3 & 2017.
Wamsley, Erin. “Dreaming and Offline Memory Consolidation.” Current Neurology and Neuroscience Reports 14.3 (2014): 433.
“Y2K Bug.” Encyclopedia Britannica. 18 July 2018. <https://www.britannica.com/technology/Y2K-bug>.


42

Hermida, Alfred. "From TV to Twitter: How Ambient News Became Ambient Journalism." M/C Journal 13, no. 2 (March 9, 2010). http://dx.doi.org/10.5204/mcj.220.

Full text

Abstract:

In a TED talk in June 2009, media scholar Clay Shirky cited the devastating earthquake that struck the Sichuan province of China in May 2008 as an example of how media flows are changing. He explained how the first reports of the quake came not from traditional news media, but from local residents who sent messages on QQ, China’s largest social network, and on Twitter, the world’s most popular micro-blogging service. "As the quake was happening, the news was reported," said Shirky. This was neither a unique nor isolated incident. It has become commonplace for the people caught up in the news to provide the first accounts, images and video of events unfolding around them. Studies in participatory journalism suggest that professional journalists now share jurisdiction over the news in the sense that citizens are participating in the observation, selection, filtering, distribution and interpretation of events. This paper argues that the ability of citizens to play “an active role in the process of collecting, reporting, analysing and disseminating news and information” (Bowman and Willis 9) means we need to reassess the meaning of ‘ambient’ as applied to news and journalism. Twitter has emerged as a key medium for news and information about major events, such as during the earthquake in Chile in February 2010 (see, for example, Silverman; Dickinson). This paper discusses how social media technologies such as Twitter, which facilitate the immediate dissemination of digital fragments of news and information, are creating what I have described as “ambient journalism” (Hermida). It approaches real-time, networked digital technologies as awareness systems that offer diverse means to collect, communicate, share and display news and information in the periphery of a user's awareness. Twitter shares some similarities with other forms of communication. Like the telephone, it facilitates a real-time exchange of information. 
Like instant messaging, the information is sent in short bursts. But it extends the affordances of previous modes of communication by combining these features in both a one-to-many and many-to-many framework that is public, archived and searchable. Twitter allows a large number of users to communicate with each other simultaneously in real-time, based on an asymmetrical relationship between friends and followers. The messages form social streams of connected data that provide value both individually and in aggregate.

News All Around

The term ‘ambient’ has been used in journalism to describe the ubiquitous nature of news in today's society. In their 2002 study, Hargreaves and Thomas said one of the defining features of the media landscape in the UK was the easy availability of news through a host of media platforms, such as public billboards and mobile phones, and in spaces, such as trains and aircraft. “News is, in a word, ambient, like the air we breathe,” they concluded (44). The availability of news all around meant that citizens were able to maintain an awareness of what was taking place in the world as they went about their everyday activities. One of the ways news has become ambient has been through the proliferation of displays in public places carrying 24-hour news channels or showing news headlines. In her book, Ambient Television, Anna McCarthy explored how television has become pervasive by extending outside the home and dominating public spaces, from the doctor’s waiting room to the bar. “When we search for TV in public places, we find a dense, ambient clutter of public audio-visual apparatuses,” wrote McCarthy (13). In some ways, the proliferation of news on digital platforms has intensified the presence of ambient news. In a March 2010 Pew Internet report, Purcell et al. found that “in the digital era, news has become omnipresent. Americans access it in multiple formats on multiple platforms on myriad devices” (2).
It seems that, if anything, digital technologies have increased the presence of ambient news. This approach to the term ‘ambient’ is based on a twentieth century model of mass media. Traditional mass media, from newspapers through radio to television, are largely one-directional, impersonal one-to-many carriers of news and information (McQuail 55). The most palpable feature of the mass media is to reach the many, and this affects the relationship between the media and the audience. Consequently, the news audience does not act for itself, but is “acted upon” (McQuail 57). It is assigned the role of consumer. The public is present in news as citizens who receive information about, and interpretation of, events from professional journalists. The public as the recipient of information fits in with the concept of ambient news as “news which is free at the point of consumption, available on demand and very often available in the background to people’s lives without them even looking” (Hargreaves and Thomas 51). To suggest that members of the audience are just empty receptacles to be filled with news is an oversimplification. For example, television viewers are not solely defined in terms of spectatorship (see, for example, Ang). But audiences have, traditionally, been kept well outside the journalistic process, defined as the “selecting, writing, editing, positioning, scheduling, repeating and otherwise massaging information to become news” (Shoemaker et al. 73). This audience is cast as the receiver, with virtually no sense of agency over the news process. As a result, journalistic communication has evolved, largely, as a process of one-way, one-to-many transmission of news and information to the public. The following section explores the shift towards a more participatory media environment.

News as a Social Experience

The shift from an era of broadcast mass media to an era of networked digital media has fundamentally altered flows of information.
Non-linear, many-to-many digital communication technologies have transferred the means of media production and dissemination into the hands of the public, and are rewriting the relationship between the audience and journalists. Where there were once limited and cost-intensive channels for the distribution of content, there are now a myriad of widely available digital channels. Henry Jenkins has written about the emergence of a participatory culture that “contrasts with older notions of passive media spectatorship. Rather than talking about media producers and consumers occupying separate roles, we might now see them as participants who interact with each other according to a new set of rules that none of us fully understands” (3). Axel Bruns has coined the term “produsage” (2) to refer to the blurred line between producers and consumers, while Jay Rosen has talked about the “people formerly known as the audience.” For some, the consequences of this shift could be “a new model of journalism, labelled participatory journalism” (Domingo et al. 331), raising questions about who can be described as a journalist and perhaps, even, how journalism itself is defined. The trend towards a more participatory media ecosystem was evident in the March 2010 study on news habits in the USA by Pew Internet. It highlighted that the news was becoming a social experience. “News is becoming a participatory activity, as people contribute their own stories and experiences and post their reactions to events” (Purcell et al. 40). The study found that 37% of Internet users, described by Pew as “news participators,” had actively contributed to the creation, commentary, or dissemination of news (44). This reflects how the Internet has changed the relationship between journalists and audiences from a one-way, asymmetric model of communication to a more participatory and collective system (Boczkowski; Deuze).
The following sections consider how the ability of the audience to participate in the gathering, analysis and communication of news and information requires a re-examination of the concept of ambient news.

A Distributed Conversation

As I’ve discussed, ambient news is based on the idea of the audience as the receiver. Ambient journalism, on the other hand, takes account of how audiences are able to become part of the news process. However, this does not mean that citizens are necessarily producing journalism within the established framework of accounts and analysis through narratives, with the aim of providing accurate and objective portrayals of reality. Rather, I suggest that ambient journalism presents a multi-faceted and fragmented news experience, where citizens are producing small pieces of content that can be collectively considered as journalism. It acknowledges the audience as both a receiver and a sender. I suggest that micro-blogging social media services such as Twitter, that enable millions of people to communicate instantly, share and discuss events, are an expression of ambient journalism. Micro-blogging is a new media technology that enables and extends society's ability to communicate, enabling users to share brief bursts of information from multiple digital devices. Twitter has become one of the most popular micro-blogging platforms, with some 50 million messages sent daily by February 2010 (Twitter). Twitter enables users to communicate with each other simultaneously via short messages no longer than 140 characters, known as ‘tweets’. The micro-blogging platform shares some similarities with instant messaging. It allows for near synchronous communications from users, resulting in a continuous stream of up-to-date messages, usually in a conversational tone. Unlike instant messaging, Twitter is largely public, creating a new body of content online that can be archived, searched and retrieved.
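Because the stream is public, archived and searchable, it can be treated as a simple data structure. The following Python fragment is a purely illustrative sketch, with invented names, messages and fields (this is not the real Twitter API), of how such a store might archive every message and index it by hashtag so that activity around an event can be retrieved and counted:

```python
# Toy "public, archived, searchable" message store. All users, texts and
# field names below are hypothetical; this is an illustration, not Twitter.
from collections import defaultdict

class TweetArchive:
    def __init__(self):
        self.tweets = []                  # archived: every message is kept
        self.by_tag = defaultdict(list)   # searchable: indexed by hashtag

    def post(self, user, text):
        """Store a message and index it under each #hashtag it contains."""
        tweet = {"user": user, "text": text}
        self.tweets.append(tweet)
        for word in text.split():
            if word.startswith("#"):
                self.by_tag[word.lower()].append(tweet)

    def search(self, tag):
        """Retrieve the stream of messages signposted with a given #hashtag."""
        return self.by_tag[tag.lower()]

archive = TweetArchive()
archive.post("alice", "Strong shaking felt downtown #quake")
archive.post("bob", "Power is out across the city #quake #news")
archive.post("carol", "Opening ceremony under way #news")

print(len(archive.search("#quake")))  # 2 messages aggregate around one event
```

Counting or grouping indexed messages in this way is, in miniature, what makes it possible to measure activity around an event across millions of otherwise disconnected messages.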
The messages can be extracted, analysed and aggregated, providing a measure of activity around a particular event or subject and, in some cases, an indication of the general sentiment about it. For example, the deluge of tweets following Michael Jackson's death in July 2009 has been described as a public and collective expression of loss that indicated “the scale of the world’s shock and sadness” (Cashmore). While tweets are atomic in nature, they are part of a distributed conversation through a social network of interconnected users. To paraphrase David Weinberger's description of the Web, tweets are “many small pieces loosely joined” (ix). In common with mass media audiences, users may be very widely dispersed and usually unknown to each other. Twitter provides a structure for them to act together as if in an organised way, for example through the use of hashtags–the # symbol–and keywords to signpost topics and issues. This provides a mechanism to aggregate, archive and analyse the individual tweets as a whole. Furthermore, information is not simply dependent on the content of the message. A user's profile, their social connections and the messages they resend, or retweet, provide an additional layer of information. This is called the social graph and it is implicit in social networks such as Twitter. The social graph provides a representation of an individual and their connections. Each user on Twitter has followers, who themselves have followers. Thus each tweet has a social graph attached to it, as does each message that is retweeted (forwarded to other users). Accordingly, social graphs offer a means to infer reputation and trust.

Twitter as Ambient Journalism

Services such as Twitter can be considered as awareness systems, defined as computer-mediated communication systems “intended to help people construct and maintain awareness of each others’ activities, context or status, even when the participants are not co-located” (Markopoulos et al., v).
In such a system, the value does not lie in the individual sliver of information that may, on its own, be of limited value or validity. Rather the value lies in the combined effect of the communication. In this sense, Twitter becomes part of an ambient media system where users receive a flow of information from both established media and from each other. Both news and journalism are ambient, suggesting that “broad, asynchronous, lightweight and always-on communication systems such as Twitter are enabling citizens to maintain a mental model of news and events around them” (Hermida 5). Obviously, not everything on Twitter is an act of journalism. There are messages about almost every topic that often have little impact beyond an individual and their circle of friends, from random thoughts and observations to day-to-day minutiae. But it is undeniable that Twitter has emerged as a significant platform for people to report, comment and share news about major events, with individuals performing some of the institutionalised functions of the professional journalist. Examples where Twitter has emerged as a platform for journalism include the 2008 US presidential elections, the Mumbai attacks in November of 2008 and the January 2009 crash of US Airways flight 1549 (Lenhard and Fox 2). In these examples, Twitter served as a platform for first-hand, real-time reports from people caught up in the events as they unfolded, with the cell phone used as the primary reporting tool. For example, the dramatic Hudson River landing of the US Airways flight was captured by ferry passenger Janis Krum, who took a photo with a cell phone and sent it out via Twitter. One of the issues associated with services like Twitter is the speed and number of micro-bursts of data, together with the potentially low signal-to-noise ratio.
For example, the number of tweets related to the disputed election result in Iran in June 2009 peaked at 221,774 in one hour, from an average flow of between 10,000 and 50,000 an hour (Parr). Hence there is a need for systems to aid in selection, organisation and interpretation to make sense of this ambient journalism. Traditionally the journalist has been the mechanism to filter, organise and interpret this information and deliver the news in ready-made packages. Such a role was possible in an environment where access to the means of media production was limited. But the thousands of acts of journalism taking place on Twitter every day make it impossible for an individual journalist to identify the collective sum of knowledge contained in the micro-fragments, and bring meaning to the data. Rather, we should look to the literature on ambient media, where researchers talk about media systems that understand individual desires and needs, and act autonomously on their behalf (for example Lugmayr). Applied to journalism, this suggests a need for tools that can analyse, interpret and contextualise a system of collective intelligence. An example of such a service is TwitterStand, developed by a group of researchers at the University of Maryland (Sankaranarayanan et al.). The team describe TwitterStand as “an attempt to harness this emerging technology to gather and disseminate breaking news much faster than conventional news media” (51). In their paper, they describe in detail how their news processing system is able to identify and cluster news tweets in a noisy medium. They conclude that “Twitter, or most likely a successor of it, is a harbinger of a futuristic technology that is likely to capture and transmit the sum total of all human experiences of the moment” (51). 
While such a comment may be something of an overstatement, it indicates how emerging real-time, networked technologies are creating systems of distributed journalism. Similarly, the US Geological Survey (USGS) is investigating social media technologies as a way to quickly gather information about recent earthquakes. It has developed a system called the Twitter Earthquake Detector to gather real-time, earthquake-related messages from Twitter and filter the messages by place, time, and keyword (US Department of the Interior). By collecting and analysing the tweets, the USGS believes it can access anecdotal information from citizens about a quake much faster than if it only relied on scientific information from authoritative sources. Both of these are examples of research into the development of tools that help users negotiate and regulate the streams and information flowing through networked media. They address issues of information overload by making sense of distributed and unstructured data, finding a single concept such as news in what Sankaranarayanan et al. say is “akin to finding needles in stacks of tweets” (43). danah boyd eloquently captured the potential for such a system, writing that “those who are most enamoured with services like Twitter talk passionately about feeling as though they are living and breathing with the world around them, peripherally aware and in tune, adding content to the stream and grabbing it when appropriate.”

Conclusion

While this paper has focused on Twitter in its discussion of ambient journalism, it is possible that the service may be overtaken by another or several similar digital technologies. This has happened, for example, in the social networking space, with Friendster being supplanted by MySpace and more recently by Facebook. However, underlying services like Twitter are a set of characteristics often referred to by the catchall phrase, the real-time Web.
As often with emerging and rapidly developing Internet trends, it can be challenging to define what the real-time Web means. Entrepreneur Ken Fromm has identified a set of characteristics that offer a good starting point to understand the real-time Web. He describes it as a new form of loosely organised communication that is creating a new body of public content in real-time, with a related social graph. In the context of our discussion of the term ‘ambient’, the characteristics of the real-time Web do not only extend the pervasiveness of ambient news. They also enable the former audience to become part of the news environment as it has the means to gather, select, produce and distribute news and information. Writing about changing news habits in the US, Purcell et al. conclude that “people’s relationship to news is becoming portable, personalized, and participatory” (2). Ambient news has evolved into ambient journalism, as people contribute to the creation, dissemination and discussion of news via social media services such as Twitter. To adapt Ian Hargreaves' description of ambient news in his book, Journalism: Truth or Dare?, we can say that journalism, which was once difficult and expensive to produce, today surrounds us like the air we breathe. Much of it is, literally, ambient, and being produced by professionals and citizens. The challenge going forward is helping the public negotiate and regulate this flow of awareness information, facilitating the collection, transmission and understanding of news.

References

Ang, Ien. Desperately Seeking the Audience. London: Routledge, 1991.
Boczkowski, Pablo J. Digitizing the News: Innovation in Online Newspapers. Cambridge: MIT Press, 2004.
boyd, danah. “Streams of Content, Limited Attention.” UX Magazine 25 Feb. 2010. 27 Feb. 2010 ‹http://uxmag.com/features/streams-of-content-limited-attention›.
Bowman, Shayne, and Chris Willis. We Media: How Audiences Are Shaping the Future of News and Information. The Media Center, 2003. 10 Jan. 2010 ‹http://www.hypergene.net/wemedia/weblog.php›.
Bruns, Axel. Blogs, Wikipedia, Second Life, and Beyond: From Production to Produsage. New York: Peter Lang, 2008.
Cashmore, Pete. “Michael Jackson Dies: Twitter Tributes Now 30% of Tweets.” Mashable 25 June 2009. 26 June 2010 ‹http://mashable.com/2009/06/25/michael-jackson-twitter/›.
Department of the Interior. “U.S. Geological Survey: Twitter Earthquake Detector (TED).” 13 Jan. 2010. 12 Feb. 2010 ‹http://recovery.doi.gov/press/us-geological-survey-twitter-earthquake-detector-ted/›.
Deuze, Mark. “The Web and Its Journalisms: Considering the Consequences of Different Types of Newsmedia Online.” New Media and Society 5 (2003): 203-230.
Dickinson, Elizabeth. “Chile's Twitter Response.” Foreign Policy 1 Mar. 2010. 2 Mar. 2010 ‹http://blog.foreignpolicy.com/posts/2010/03/01/chiles_twitter_response›.
Domingo, David, Thorsten Quandt, Ari Heinonen, Steve Paulussen, Jane B. Singer, and Marina Vujnovic. “Participatory Journalism Practices in the Media and Beyond.” Journalism Practice 2.3 (2008): 326-342.
Fromm, Ken. “The Real-Time Web: A Primer, Part 1.” ReadWriteWeb 29 Aug. 2009. 7 Dec. 2009 ‹http://www.readwriteweb.com/archives/the_real-time_web_a_primer_part_1.php›.
Hargreaves, Ian. Journalism: Truth or Dare? Oxford: Oxford University Press, 2003.
Hargreaves, Ian, and James Thomas. “New News, Old News.” ITC/BSC, Oct. 2002. 5 Dec. 2009 ‹http://legacy.caerdydd.ac.uk/jomec/resources/news.pdf›.
Hermida, Alfred. “Twittering the News: The Emergence of Ambient Journalism.” Journalism Practice. First published 11 Mar. 2010 (iFirst). 12 Mar. 2010 ‹http://www.informaworld.com/smpp/content~content=a919807525›.
Jenkins, Henry. Convergence Culture: Where Old and New Media Collide. New York: New York University Press, 2006.
Lenhard, Amanda, and Susannah Fox. “Twitter and Status Updating.” Pew Internet and American Life Project, 12 Feb. 2009. 13 Feb. 2010 ‹http://www.pewinternet.org/Reports/2009/Twitter-and-status-updating.aspx›.
Lugmayr, Artur. “The Future Is ‘Ambient.’” Multimedia on Mobile Devices II. Proceedings of SPIE Vol. 6074. Eds. Reiner Creutzburg, Jarmo H. Takala, and Chang Wen Chen. San Jose: SPIE, 2006.
Markopoulos, Panos, Boris De Ruyter, and Wendy MacKay. Awareness Systems: Advances in Theory, Methodology and Design. Dordrecht: Springer, 2009.
McCarthy, Anna. Ambient Television: Visual Culture and Public Space. Durham: Duke University Press, 2001.
McQuail, Denis. McQuail’s Mass Communication Theory. London: Sage, 2000.
Parr, Ben. “Mindblowing #IranElection Stats: 221,744 Tweets per Hour at Peak.” Mashable 17 June 2009. 10 Aug. 2009 ‹http://mashable.com/2009/06/17/iranelection-crisis-numbers/›.
Purcell, Kristen, Lee Rainie, Amy Mitchell, Tom Rosenstiel, and Kenny Olmstead. “Understanding the Participatory News Consumer.” Pew Internet and American Life Project, 1 Mar. 2010. 2 Mar. 2010 ‹http://www.pewinternet.org/Reports/2010/Online-News.aspx?r=1›.
Rosen, Jay. “The People Formerly Known as the Audience.” Pressthink 27 June 2006. 8 Aug. 2009 ‹http://journalism.nyu.edu/pubzone/weblogs/pressthink/2006/06/27/ppl_frmr.html›.
Sankaranarayanan, Jagan, Hanan Samet, Benjamin E. Teitler, Michael D. Lieberman, and Jon Sperling. “TwitterStand: News in Tweets.” Proceedings of the 17th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems (GIS '09). New York: ACM, 2009. 42-51.
Shirky, Clay. “How Social Media Can Make History.” TED Talks June 2009. 2 Mar. 2010 ‹http://www.ted.com/talks/clay_shirky_how_cellphones_twitter_facebook_can_make_history.html›.
Shoemaker, Pamela J., Tim P. Vos, and Stephen D. Reese. “Journalists as Gatekeepers.” Handbook of Journalism Studies. Eds. Karin Wahl-Jorgensen and Thomas Hanitzsch. New York: Routledge, 2008. 73-87.
Silverman, Matt. “Chile Earthquake Pictures: Twitter Photos Tell the Story.” Mashable 27 Feb. 2010. 2 Mar. 2010 ‹http://mashable.com/2010/02/27/chile-earthquake-twitpics/›.
Singer, Jane. “Strange Bedfellows: The Diffusion of Convergence in Four News Organisations.” Journalism Studies 5 (2004): 3-18.
Twitter. “Measuring Tweets.” Twitter blog, 22 Feb. 2010. 23 Feb. 2010 ‹http://blog.twitter.com/2010/02/measuring-tweets.html›.
Weinberger, David. Small Pieces, Loosely Joined. Cambridge, MA: Perseus Publishing, 2002.


43

Lacroix, Céline Masoni. "From Seriality to Transmediality: A Socio-Narrative Approach of a Skilful and Literate Audience." M/C Journal 21, no. 1 (March 14, 2018). http://dx.doi.org/10.5204/mcj.1363.

Full text

Abstract:

Screens, as technological but also narrative and social devices, alter reading and writing practices. Users consume vids, read stories on the Web, and produce creative contents on blogs or Web archives, etc. Uses of seriality and transmediality are here discussed, that is watching, reading, and writing as interpreting, as well as respective and reciprocal uses of iteration and interaction (with technologies and with others). A specific figure of users or readers will be defined as a skilful and literate audience: fans on archives (FanFiction.net-FFNet, and Archive of Our Own-AO3). Fans produce serial and transmedia narratives based upon their favourite TV Shows, publish on-line, and often produce discourses or meta-discourse on this writing practice or on writing in general.The broader perspective of reception studies allows us to develop a three-step methodology that develops into a process. The first step is an ethnographic approach based on practices and competencies of users. The second step develops and clarifies the ethnographic dimension into an ethno-narrative approach, which aims at analysing mutual links between signs, texts, and uses of reading and writing. The main question is that of significance and meaning. The third step elaborates upon interactions in a technological and mediated environment. Social, participative, or collaborative and multimodal dimensions of interacting are yet regarded as key elements in reshaping a reading-writing cultural practice. The model proposed is a socio-narrative device, which hangs upon three dimensions: techno-narrative, narratological, and socio-narrative. These three dimensions of a shared narrative universe illustrate the three steps process. Each step also offers specific uses of interacting: an ethnographic approach of fictional expectation, a narrative ethnography of iteration and transformation, and a socio-narrative perspective on dialogism and recognition. 
A specific but significant example of fans' uses of reading and interacting will illustrate each step of the methodology. This qualitative approach of individual uses aims to be representative of fans' cultural practice (See Appendix 1). We will discuss cultural uses of appropriation. How do reading, interpreting, writing, and rewriting, that is to say interacting, produce meaning, create identities, and build up our relation to others and to the (story)world? Given our interest in embodied and appropriated meanings, appropriation will be revealed as an open cultural process, which can question the conflict and/or the convergence of the old and the new in cultural practices, and the way former and formal dichotomies have to be re-evaluated. We will take an interest in the composition of meaning that unfolds a cultural and critical process, from acknowledgement to recognition, a process where iteration and transformation are no longer opposites but part of a continuum.From Users' Competencies to the Composition of Narrative and Social Skills: A Fictional ExpectationThe pragmatic question of real uses steers our approach toward reading and writing in a mediated environment. Michel de Certeau's work first encourages us to apply his concepts of strategies and tactics to institutional strategies of engaging the audience and to real audience tactics of appropriation or diversion. Real uses are traceable on forums, discussions groups, weblogs, and archives. 
A model can be built upon digital tracks of use left on fan fiction archives: types of audience, interactions, and types of usage are here considered.

Media Types | Interaction Types | Usage Types
Media audience | Consumer; Skilful | Viewing; Reading; Information search; Content production (informative, critical, and creative)
Multimedia audience | Consumer; Skilful | + Online reading; E-shopping; Sharing; Recommendation; Discussion; Informative content production
Cross-media audience | Consumer; Skilful | + Serendipity; Putting objects in perspective; Networking; Critical content production
Transmedia audience | Consumer; Skilful; Involved; Precursor | + Understanding enhanced narratives; Value judgments, evaluation; Understanding economic dimensions of the media system; Creative content production

Table 1 (Cailler and Masoni Lacroix)

Users gear their reading and writing practices toward one medium, or toward multiple media in multi-, cross-, and trans- dimensions. These dimensions engage different and specific kinds of content production, and also the way users think about their relation to the media system. We focus on cumulative uses needed in an evolving media system. Depending on their desire for cultural products issued from creative and entertainment industries, audiences can be consumer-oriented or skilful, but also what we term "involved" or "precursor." Their interactive capacity within these industries allows audiences to produce informative, narrative, discursive, creative (or re-creative), and critical content. An ethnographic approach, based upon uses, understands that accumulating, crossing, and mastering different uses requires available and potential competencies and literacies, which may be immediately usable, or which have to be gained.

Figure 1 (Masoni Lacroix and Cailler)

The English language enables us to use different words to specify competencies, from ability to skill (when multiple abilities tend towards appropriation), to capability and competency (when multiple skills tend towards cultural practice).
This introduces an enhancement process, which describes the way users accumulate and cross competencies to enhance their capability of understanding a multimedia or transmedia system, shaped by multiple semiotic systems and literacies. Abilities and skills represent different literacies that can be distributed in four groups: literacy, graphic literacy, digital literacy and interactive literacy, converging to a core of competencies including cognitive capability, communicative capability, cultural capability and critical capability. (In the original table, critical skills appear in bold italics.)

Digital Literacy: Technical ability / Computational ability / Digital ability or skill / Informational skill
Visual Literacy: Graphic ability / Visual ability / Semiotic skill / Symbolic skill
Core of Competencies: Cognitive capability / Communicative capability / Cultural capability / Critical capability
Interactive Literacy: Interactional ability / Spectatorial ability / Collective ability / Affective skill
Literacy: Narrative ability or skill / Linguistic ability / Reading and interpreting ability / Mimetic and fictional ability / Discursive skill

Table 2 (Masoni Lacroix and Cailler)

Our first illustration exhibits the diversity, even the profuse and confused multiplicity, of cultural influences and preferences of a fan, which he or she comprehends as a whole. Gabihime, born on 6 October in Lafayette, Louisiana, in the United States, joined FFNet in 2001, and last updated her profile in September of 2010. She has written 44 stories for a variety of fandoms, and she belongs to two fandom communities. She has written one story about Twin Peaks (1990-) for an annual fandom gift exchange in 2008. Within Twin Peaks, her favourite and only romantic pair is Audrey Horne and Dale Cooper. Pairing represents a formal and cultural use of fan fiction writing, and also a favourite variation of the original text.
Gabihime proposes notes to follow the story: I love Twin Peaks, and I love Audrey Horne particularly, and the rich stilted imagery of the show certainly […] I started watching my favourite season one episodes and reading the script notes for them. When I got to the 4-5 episode break (when Cooper comes back from visiting Jacques's cabin to the delightful sounds of the Icelandic junket roaring at their big shindig and finds Audrey in his bed) I discovered that this scene was originally intended to be left extremely ambiguous. Two main elements can be highlighted. Love founds fans' relation to the characters and the text. Interaction is based on this affect or emotion. Ambiguity, real or presumed, leads to what can be called a fictional expectation. This strong motive to interact within a text means that readers have to fill in the blanks of the text (Jenkins, "Transmedia"). They fill it with their desire for a character, a pairing, and a story. Another illustration of a fan's affective investment, Lynzee005 (see below) specifies that her fiction, "shows what I hope happened in between the scenes to which we were treated in the series." Gabihime does not write fan fiction stories anymore. She has a web site where she posts her stories and links to other fan art, vids, or fiction, as well as a blog where she writes her original fiction, and various meta-narrative and/or meta-discursive productions, including a wiki, Tumblr account, LiveJournal page, and Twitter account.

A Narrative Ethnography of Fans' Production Content: Acculturation as Iteration and Transformation

We can briefly focus on another partial but significant example of narratives and discourses of a fan, in the perspective of a qualitative and iterative approach.
We will then emphasise that narratives and discourses circulate, in other words that they are written and reformulated in and on different periods and platforms, but also that narratives use iteration and variation (Eco 1985).Lynzee005 was born in 1985 in Canada. She joined FFNet in 2008 and last updated her profile in September 2015. She has a beta profile, which means that she reads and reviews other fans' work-in-progress. We can also clarify that publishing chapter-by-chapter and being re-read on FFNet appears to be a principle of writing and of writing circulation. So, writing reveals an iterative and participative practice.Prior to this updating she wrote:When I read, I look for an emotional connection with the characters and I hope to be genuinely invested in where the story is going. […] I tackle everything in chunks, concentrating on the big issues (consistent characterization, believable plot lines, etc.) before moving down to the smaller ones (spelling, punctuation). Once I finish reading a "chunk," I put it together in the whole and see if it works against the other "chunks," and if not, then I go back and start over.She has written 17 stories for 7 different fandoms. She wrote five stories for Twin Peaks including a crossover with another fandom. She joined AO3 in December 2014 and completed her Twin Peaks trilogy. Her profile no longer underlines this serial process of chunking and dispersal, stressed by Jenkins ("Transmedia"), but only evokes how scenes can be stitched together. She now insists on the outcome of unity or continuity rather than on the process of serialization and fragmentation.Stories about fans, their affective and interpretive relations to a story universe and their uses of reading and writing in and out a fandom, can illustrate a diversity of attachments and interests. We can briefly describe a range of attachments. 
Attachment to the character, described above, can move towards self-narration, to the exhibit of self both as a person and a character, to a self-distancing, an identity affect. Attachment also has interpretative and critical dimensions. Attached to a narrative universe, attached to storytelling, fans promote a writing normalisation and a narrative format (genre, pairing, tagging, memes, etc.). Every fan seems to iterate and alter this conduct. This appropriation renews self-imposed narrative codes. The use of writing by fans, based on attachments, is both iterative and transformative. The Organization for Transformative Works (OTW), AO3's parent apparatus, asserts that derivative fans' work is transformative. According to Umberto Eco's vision of a postmodern aesthetics of seriality, "Something is offered as original and different […] this something is repeating something else that we already know; and […] just because of it we like it" (167). There is an "enjoyment of variations" (174). "Seriality and repetition are not opposed to innovation" (175). Eco claims a dialectic between repetition and innovation, that is to say a "dialectic between order and novelty–in other words, between scheme and innovation," where "the variation is no longer more appreciable than the scheme" (173). We acknowledge the "inseparable knot of scheme-variation" he is stressing (Eco 180), and we intend to put narrative fragmentation and narration dispersal forward to their reconstruction in a narrative universe as a whole, within the socio-narrative device. The knot illustrates the dialogical principle of exceeding dichotomies that will be discussed hereunder. The plurality of uses and media calls for an accumulation of competencies, which engage users in the process of media acculturation.
A "literate" or skilful user should be able to comprehend "the flow of content across multiple media platforms," the media industries' cooperation, "the migratory behavior of media audiences," and the "technological, industrial, cultural, and social changes" that the word convergence manages to describe (Jenkins, Convergence 3). Acculturation conveys an appropriation process, borrowed from the "French" sociology of uses. Audiences gradually become intimate with the context of the evolving media environment. Scholars progressively understand how audiences familiarize themselves with competencies until they master literacies, where competencies are gathered. Users become sensitive to, as well as mindful of, time and space in literacy (Literacy), of how writing can be spatialised (Graphic Literacy), of how the media space is technologized (Digital Literacy), and of what kind of structural interactions are emerging (Interactive Literacy). Thus, the research question takes shape: "What kind of interactions can users establish with objects that are both technical and cultural?" Which also means: "In a study of effective uses, can the researcher find appropriation logics or tactics in the way users, specifically here readers and writers, improve their cultural practices?" As Davallon and Le Marec argue, uses have to be included in a process of cultural growth. Users can cross the technical and cultural dimensions of an object in two main ways: they can compare the object with other cultural products they are used to, or they can grasp its novelty by engaging a cognitive and cultural capability of adaptation. Acknowledgment and adaptation are part of the social process of cultural growth. In this sense, use can be an integrated activity or a novel one. The model of cultural growth means that different and dispersed uses progressively enter a meaning-making process.
The question of meaning holds together, even unifies, multiple uses of reading and writing in a cultural practice of reading-writing. With this in mind, the core of competencies described above accurately displays the importance of critical skills (semiotic, informational, affective, symbolic, narrative, and discursive) nourishing a critical capability. Critically literate, users are able to question the place to which they have been assigned and the place they can gain in an evolving (and even uncertain) media system. They can elaborate a critical reflection on their own practices of reading and writing.

Two Principles of a Socio-Narrative Device: Dialogism and Recognition

Uses of reading and writing online invite us to visualize and think through the convergence of a narrative object (technical, visual, and cultural), its medium and format(s), and the audiences involved. Here, multimodality has to be (re)considered. This is not only a question of different modes but a question of multiplicity in reading and writing uses, which leads us to the way a fan's attachment creates his or her participation in the meaning of the text, and more generally to questions of the polyphonic form of writing. Dispersed uses converging into a cultural and social practice bring to light dialogical dimensions of writing, in the sense pointed out by Bakhtin in the early 1930s. Dialogism expands the notion of intertextuality to a social practice; enunciation appears polyphonic, and speakers are interacting. Every discourse is oriented to other discourses, interacting and responding to pre-existing discourses addressing the same object. Discourse is always others' discourse and shows a multiple and inter-relational subject. A fan producing meta-narratives or meta-discourses on media and fan fiction is an inter-relational subject. By way of illustration, Slaymesoftly displays her stories on AO3, on her own Web site, and on specialized archives.
She does not justify fan fiction writing through warnings or disclaimers but defines broadly what fiction is and how she uses fiction in her stories. She analyses publishing, describes her universe and the alternative universes that she explores, and depicts how stories become a series. Slaymesoftly can be considered a literate fan, approaching writing with emotion or attachment and critical rationality, or more precisely, leading her attachment to writing with the distance that critical thought allows. She writes "Essays – about writing, vampires, and whatever else I decide to blather on about" on her Web site or on her LiveJournal, where she also joined a community. In the main, Slaymesoftly experiences multiple variations, in the sense of Eco: variations that oppose and tie a character to a canon, or a loving writing object to what could be newly told. Slaymesoftly also exposes the desire for recognition engaged by fans' uses of interaction. This process of mutual recognition, stated in Hegel's Phenomenology of Spirit, highlights and questions fans' attachment, individual identity, and normative foundation. Mutual recognition could strengthen communitarianism or conformism in writing, but it can also offer a way for attachments to be shared, a way to initiate a narrative, and a social practice of dialog. Dialogical dimensions of cultural practices of reading-writing (both in production and reception) design a fragmented narrative universe, unfinished but one, that can be comprehended within a socio-narrative device.

Figure 2 (Masoni Lacroix & Cailler)

Texts, authors, writers, and readers are not opposed but are part of a socio-narrative continuity. This device crosses three complementary and evolving dimensions of the narrative universe: techno-narrative, socio-narrative (playful, creative, and critical, in their interactivity), and narratological.
Uses of literacy generating multimedia, cross-media, and transmedia productions also question the multimodal form of writing and invite us to an iterative, open, dialogical, and interrogative practice of multimodality. A (post)narratological activity opens up to an interrogative practice. This practice dialogs with others' discourse and narrative. The questioning complexity remains open. In a proximate sense, a transmedia narrative is fragmented, open to incompletion, but enrolled in a continuum (Jenkins, "Transmedia"). Looking back, through the now-overcome dichotomy between production and reception, a social and narrative process has been described that leads to the reshaping of multiple uses of literacies into cultural practices, and further on, to a cultural and social practice of reading-writing blended into interactivity. Competencies, dictated uses of reading and writing, and alterna(rra)tive upsurges (such as fans' production content) can be questioned. What can be questioned is the fragmentation, the incompletion, and the continuity of narratives, which Jenkins no longer brings into conflict ("Transmedia"). This is also what the social and narrative form of dialogism teaches us: dichotomies, as a tool or a structure of thought, appear suspect or no longer significant. There is continuity in the acculturation process, from acknowledgement to recognition, continuity in the multiple uses of interacting, continuity from narrative to discourse, continuity from emotion to writing critically, a transformative continuity in iteration and variation, a polyphonic continuity.

References

Bakhtin, Michaïl, and V.N. Volosinov. Marxism and the Philosophy of Language. Cambridge: Harvard UP, 1973.

Cailler, Bruno, and Céline Masoni Lacroix. "El 'French Touch' Transmediatico: Un Inventario." Transmediación: Espacios, Reflexiones y Experiencias. Eds. Denis Porto Renó et al. Bogotá, Colombia: Editorial Universidad del Rosario, 2012. 181-98.

Davallon, Jean, and Joëlle Le Marec.
"L'Usage en son Contexte. Sur les Usages des Interactifs des Cédéroms des Musées." Réseaux 101 (2000): 173-95.

De Certeau, Michel. L'Invention du Quotidien. Paris: Folio Essais, 1990.

Eco, Umberto. "Innovation and Repetition: Between Modern and Postmodern Aesthetics." Daedalus 114 (1985): 161-84.

Hegel, G.W.F. Phénoménologie de l'Esprit. Trans. Bernard Bourgeois. Paris: Vrin, 2006.

Jenkins, Henry. Convergence Culture: Where Old and New Media Collide. New York: New York UP, 2006.

———. "Transmedia 202: Further Reflections." 2011. <http://henryjenkins.org/2011/08/defining_transmedia_further_re.html>.

Masoni Lacroix, Céline. "Mise en Récit des Fictions de Fans de Séries Télévisées: Variations, Granularité et Réflexivité." Tension Narrative et Storytelling. Eds. Nicolas Pélissier and Marc Marti. Paris: L'Harmattan, 2014. 83-100.

———. "Narrativités 2.0: Fragmentation-Organisation d'un Métadiscours." Cahiers de Narratologie 32 (2017). <http://journals.openedition.org/narratologie/7781>.

———, and Bruno Cailler. "Fans versus Universitaires, l'Hypothèse Dialogique de la Transmédialité au sein d'un Dispositif Socio-narratif." Revue Française des Sciences de l'Information et de la Communication 7 (2015). <http://journals.openedition.org/rfsic/1662>.

———, and Bruno Cailler. "Principes Co-extensifs de la Fiction Sérielle, de la Distribution Diffusée à une Pratique Interprétative Dialogique: une Nouvelle Donne Socio-narrative?" Cahiers de Narratologie 31 (2016). <http://narratologie.revues.org/7576>.

TV Show Fandoms Explored

Buffy the Vampire Slayer (Joss Whedon).
Sherlock (Mark Gatiss & Steven Moffat).
Twin Peaks (Mark Frost & David Lynch).
Wallander (from Henning Mankell to Philip Martin).


44

Cham, Karen, and Jeffrey Johnson. "Complexity Theory: A Science of Cultural Systems?" M/C Journal 10, no. 3 (June 1, 2007). http://dx.doi.org/10.5204/mcj.2672.

Full text

Abstract:

Complex systems are an invention of the universe. It is not at all clear that science has an a priori primacy claim to the study of complex systems. (Galanter 5)

Introduction

In popular dialogues, describing a system as "complex" is often the point of resignation, inferring that the system cannot be sufficiently described, predicted, or managed. Transport networks, management infrastructure, and supply chain logistics are all often described in this way. In socio-cultural terms, "complex" is used to describe those humanistic systems that are "intricate, involved, complicated, dynamic, multi-dimensional, interconnected systems [such as] transnational citizenship, communities, identities, multiple belongings, overlapping geographies and competing histories" (Cahir & James). Academic dialogues have begun to explore the collective behaviours of complex systems to define a complex system specifically as an adaptive one; i.e. a system that demonstrates 'self-organising' principles and 'emergent' properties. Based upon the key principles of interaction and emergence in relation to adaptive and self-organising systems in cultural artifacts and processes, this paper will argue that complex systems are cultural systems. By introducing generic principles of complex systems, and looking at the exploration of such principles in art, design, and media research, this paper argues that a science of cultural systems as part of complex systems theory is the postmodern science for the digital age; furthermore, that such a science was predicated by post-structuralism and has been manifest in art, design, and media practice since the late 1960s.

Complex Systems Theory

Complexity theory grew out of systems theory, a holistic approach to analysis that views whole systems based upon the links and interactions between the component parts and their relationship to each other and the environment within which they exist.
This stands in stark contrast to conventional science, which is based upon Descartes's reductionism, where the aim is to analyse systems by reducing something to its component parts (Wilson 3). As systems thinking is concerned with relationships more than elements, it proposes that in complex systems small catalysts can cause large changes, and that a change in one area of a system can adversely affect another area of the system. As is apparent, systems theory is a way of thinking rather than a specific set of rules, and similarly there is no single unified Theory of Complexity; rather, several different theories have arisen from the natural sciences, mathematics, and computing. As such, the study of complex systems is very interdisciplinary and encompasses more than one theoretical framework. Whilst key ideas of complexity theory developed through artificial intelligence and robotics research, other important contributions came from thermodynamics, biology, sociology, physics, economics, and law. In her volume for the Elsevier Advanced Management Series, "Complex Systems and Evolutionary Perspectives on Organisations", Eve Mitleton-Kelly gives a comprehensive overview of this evolution as five main areas of research: complex adaptive systems; dissipative structures; autopoiesis and (non-equilibrium) social systems; chaos theory; and path dependence. Here, Mitleton-Kelly points out that relatively little work has been done on developing a specific theory of complex social systems, despite much interest in complexity and its application to management (Mitleton-Kelly 4). To this end, she goes on to define the term "complex evolving system" as more appropriate to the field than "complex adaptive system" and suggests that the term "complex behaviour" is thus more useful in social contexts (Mitleton-Kelly).
For our purpose here, "complex systems" will be the general term used to describe those systems that are diverse and made up of multiple interdependent elements, and that are often 'adaptive', in that they have the capacity to change and learn from events. This is in itself both 'evolutionary' and 'behavioural' and can be understood as emerging from the interaction of autonomous agents – especially people. Some generic principles of complex systems defined by Mitleton-Kelly that are of concern here are: self-organisation; emergence; interdependence; feedback; the space of possibilities; co-evolution; and the creation of new order. Whilst the behaviours of complex systems clearly do not fall into our conventional top-down perception of management and production, anticipating such behaviours is becoming more and more essential for products, processes, and policies. For example, compare the traditional top-down model of news generation, distribution, and consumption to the "emerging media eco-system" (Bowman and Willis 14).

Figure 1 (Bowman & Willis 10)
Figure 2 (Bowman & Willis 12)

To the traditional news organisations, such a "democratization of production" (McLuhan 230) has been a huge cause for concern. The agencies once solely responsible for the representation of reality are now lost in a global miasma of competing perspectives. Can we anticipate and account for complex behaviours? Eve Mitleton-Kelly states that "if organisations are understood as complex evolving systems co-evolving as part of a social 'ecosystem', then that changed perspective changes ways of acting and relating which lead to a different way of working. Thus, management strategy changes, and our organizational design paradigms evolve as new types of relationships and ways of working provide the conditions for the emergence of new organisational forms" (Mitleton-Kelly 6).
Complexity in Design

It is thus through design practice and processes that discovering methods for anticipating complex systems behaviours seems most possible. The Embracing Complexity in Design (ECiD) research programme is a contemporary interdisciplinary research cluster consisting of academics and designers from architectural engineering, robotics, geography, digital media, sustainable design, and computing, aiming to explore the possibility of transdisciplinary principles of complexity in design. Overarching this work is the conviction that design can be seen as a model for complex systems researchers motivated by applying complexity science in particular domains. Key areas in which design and complexity interact have been established by this research cluster. Most immediately, many designed products and systems are inherently complex to design in the ordinary sense. For example, when designing vehicles, architecture, or microchips, designers need to understand the complex dynamic processes used to fabricate and manufacture products and systems. The social and economic context of design is also complex, from market economics and legal regulation to social trends and mass culture. The process of designing can also involve complex social dynamics, with many people processing and exchanging complex heterogeneous information over complex human and communication networks, in the context of many changing constraints. Current key research questions are: how can the methods of complex systems science inform designers? How can design inform research into complex systems? Whilst ECiD acknowledges that to answer such questions effectively the theoretical and methodological relations between complexity science and design need further exploration and enquiry, there are no reliable precedents for such an activity across the sciences and the arts in general.
Indeed, even in areas where a convergence of humanities methodology with scientific practice might seem most pertinent, examples are few and far between. In his paper "Post-Structuralism, Hypertext & the World Wide Web", Luke Tredinnick states that "despite the concentration of post-structuralism on text and texts, the study of information has largely failed to exploit post-structuralist theory" (Tredinnick 5). Yet it is surely in the convergence of art and design with computation and the media that a search for practical trans-metadisciplinary methodologies might be most fruitful. It is in design for interactive media, where algorithms meet graphics, where the user can interact, adapt, and amend, that self-organisation, emergence, interdependence, feedback, the space of possibilities, co-evolution, and the creation of new order are embraced on a day-to-day basis by designers. A digitally interactive environment such as the World Wide Web clearly demonstrates all the key aspects of a complex system. Indeed, it has already been described as a 'complexity machine' (Qvortup 9). It is important to remember that this 'complexity machine' has been designed. It is an intentional facility. It may display all the characteristics of complexity but, whilst some of its attributes are most demonstrative of self-organisation and emergence, the Internet itself has not emerged spontaneously. For example, Tredinnick details the evolution of the World Wide Web through the Memex machine of Vannevar Bush, through Ted Nelson's hypertext system Xanadu, to Tim Berners-Lee's Enquire (Tredinnick 3). The Internet was engineered. So, whilst we may not be able to entirely predict complex behaviour, we can, and do, quite clearly design for it. When designing digitally interactive artifacts we design parameters or coordinates to define the space within which a conceptual process will take place.
We can never begin to predict precisely what those processes might become through interaction, emergence, and self-organisation, but we can establish conceptual parameters that guide and delineate the space of possibilities. Indeed this fact is so transparently obvious that many commentators in the humanities have been pushed to remark that interaction is merely interpretation, and that so-called new media is not new at all; that one interacts with a book in much the same way as with a digital artifact. After all, post-structuralist theory had established the "death of the author" in the 1970s – the a priori that all cultural artifacts are open to interpretation, where all meanings must be completed by the reader. The concept of the "open work" (Eco 6) has been an established postmodern concept for over 30 years and is commonly recognised as a feature of surrealist montage, poetry, the writings of James Joyce, and even advertising design, where a purposive space for engagement and interpretation of a message is designated, without which the communication does not "work". However, this concept is also most successfully employed in relation to installation art and, more recently, interactive art, as a reflection of the artist's conscious decision to leave part of a work open to interpretation and/or interaction.

Art & Complex Systems

One of the key projects of Embracing Complexity in Design has been to look at the relationship between art and complex systems. There is a relatively well-established history of exploring art objects as complex systems in themselves that finds its origins in the systems art movement of the 1970s.
In his paper "Observing 'Systems Art' from a Systems-Theoretical Perspective", Francis Halsall defines systems art as "emerging in the 1960s and 1970s as a new paradigm in artistic practice … displaying an interest in the aesthetics of networks, the exploitation of new technology and New Media, unstable or de-materialised physicality, the prioritising of non-visual aspects, and an engagement (often politicised) with the institutional systems of support (such as the gallery, discourse, or the market) within which it occurs" (Halsall 7). More contemporarily, "Open Systems: Rethinking Art c.1970", at Tate Modern, London, focuses upon systems artists' "rejection of art's traditional focus on the object, to wide-ranging experiments with media that included dance, performance and…film & video" (De Salvo 3). Artists include Andy Warhol, Richard Long, Gilbert & George, Sol LeWitt, Eva Hesse, and Bruce Nauman. In 2002, the Samuel Dorsky Museum of Art, New York, held an international exhibition entitled "Complexity: Art & Complex Systems", concerned with "art as a distinct discipline offer[ing] its own unique approache[s] and epistemic standards in the consideration of complexity" (Galanter and Levy 5). The organisers go on to describe four ways in which artists engage the realm of complexity: presentations of natural complex phenomena that transcend conventional scientific visualisation; descriptive systems which describe complex systems in an innovative and often idiosyncratic way; commentary on complexity science itself; and technical applications of genetic algorithms, neural networks, and a-life. ECiD artist Julian Burton makes work that visualises how companies operate in specific relation to their approach to change and innovation. He is a strategic artist and facilitator who makes "pictures of problems to help people talk about them" (Burton).
Clients include public and private sector organisations such as Barclays, Shell, Prudential, KPMG, and the NHS. He is quoted as saying: "Pictures are a powerful way to engage and focus a group's attention on crucial issues and challenges, and enable them to grasp complex situations quickly. I try and create visual catalysts that capture the major themes of a workshop, meeting or strategy and re-present them in an engaging way to provoke lively conversations" (Burton). This is a simple and direct method of using art as a knowledge elicitation tool that falls into the first and second categories above. The third category is demonstrated by the groundbreaking TechnoSphere, which was specifically inspired by complexity theory, landscape, and artificial life. Launched in 1995 as an Arts Council funded online digital environment, it was created by Jane Prophet and Gordon Selley. TechnoSphere is a virtual world populated by artificial life forms created by users of the World Wide Web. The digital ecology of the 3D world, housed on a server, depends on the participation of an online public who access the world via the Internet. At the time of writing it has attracted over 100,000 users who have created over a million creatures. The artistic exploration of technical applications is by default a key field for researching the convergence of trans-metadisciplinary methodologies. Troy Innocent's lifeSigns evolves multiple digital media languages "expressed as a virtual world – through form, structure, colour, sound, motion, surface and behaviour" (Innocent). The work explores the idea of "emergent language through play – the idea that new meanings may be generated through interaction between human and digital agents". Thus this artwork combines three areas of converging research: artificial life, computational semiotics, and digital games. In his paper "What Is Generative Art?
Complexity Theory as a Context for Art Theory", Philip Galanter describes all art as generative on the basis that it is created from the application of rules. Yet, as demonstrated above, what is significantly different and important about digital interactivity, as opposed to its predecessor, interpretation, is its provision of a graphical user interface (GUI) to component parts of a text, such as symbol, metaphor, narrative, etc., for the multiple "authors" and the multiple "readers" in a digitally interactive space of possibility. This offers us tangible, instantaneous reproduction and dissemination of interpretations of an artwork.

Conclusion: Digital Interactivity – A Complex Medium

Digital interaction of any sort is thus a graphic model of the complex process of communication. Here, complexity does not need deconstructing, representing, or modelling, as the aesthetics (as in apprehended by the senses) of the graphical user interface conveniently come first. Design for digital interactive media is thus design for complex adaptive systems. The theoretical and methodological relations between complexity science and design can be expounded especially well through post-structuralism. The work of Barthes, Derrida, and Foucault offers us the notion of all cultural artefacts as texts or systems of signs, whose meanings are not fixed but rather sustained by networks of relationships. Implemented in a digital environment, post-structuralist theory is tangible complexity. Strangely, whilst Philip Galanter states that science has no necessary overreaching claim to the study of complexity, he then argues conversely that "contemporary art theory rooted in skeptical continental philosophy [reduces] art to social construction [as] postmodernism, deconstruction and critical theory [are] notoriously elusive, slippery, and overlapping terms and ideas…that in fact [are] in the business of destabilising apparently clear and universal propositions" (4).
This seems to imply that for Galanter, postmodern rejections of grand narratives necessarily exclude the "new scientific paradigm" of complexity, a paradigm that he himself is looking to be universal. Whilst he cites Lyotard (6), describing both political and linguistic reasons why postmodern art celebrates plurality, denying any progress towards singular totalising views, he fails to consider what happens if that singular totalising view incorporates interactivity. Surely complexity is pluralistic by its very nature? In the same vein, if language for Derrida is "an unfixed system of traces and differences … regardless of the intent of the authored texts … with multiple equally legitimate meanings" (Galanter 7), then I have heard no better description of the signifiers, signifieds, connotations, and denotations of digital culture. Complexity in its entirety can also be conversely understood as the impact of digital interactivity upon culture per se, which has a complex causal relation in itself; Qvortup's notion of a "communications event" (9), such as the Danish publication of the Mohammed cartoons, falls into this category. Yet a complex causality could be traced further into cultural processes, enlightening media theory: from the relationship between advertising campaigns and brand development, to the exposure and trajectory of the celebrity, describing the evolution of visual language in media cultures, and informing the relationship between exposure to representation and behaviour. In digital interaction the terms art, design, and media converge into a process-driven, performative event that demonstrates emergence through autopoietic processes within a designated space of possibility. By insisting that all artwork is generative, Galanter, like many other writers, negates the medium entirely, which allows him to insist that generative art is "ideologically neutral" (Galanter 10).
Generative art, like all digitally interactive artifacts, is not neutral but rather ideologically plural. Thus, if one integrates Qvortup's (8) delineation of medium theory with complexity theory, we may have what we need: a first theory of a complex medium. Through interactive media, complexity theory is the first postmodern science; the first science of culture.

References

Bowman, Shane, and Chris Willis. We Media. 21 Sep. 2003. 9 March 2007 <http://www.hypergene.net/wemedia/weblog.php>.

Burton, Julian. "Hedron People." 9 March 2007 <http://www.hedron.com/network/assoc.php4?associate_id=14>.

Cahir, Jayde, and Sarah James. "Complex: Call for Papers." M/C Journal 9 Sep. 2006. 7 March 2007 <http://journal.media-culture.org.au/journal/upcoming.php>.

De Salvo, Donna, ed. Open Systems: Rethinking Art c. 1970. London: Tate Gallery Press, 2005.

Eco, Umberto. The Open Work. Cambridge, Mass.: Harvard UP, 1989.

Galanter, Phillip, and Ellen K. Levy. Complexity: Art & Complex Systems. SDMA Gallery Guide, 2002.

Galanter, Phillip. "Against Reductionism: Science, Complexity, Art & Complexity Studies." 2003. 9 March 2007 <http://isce.edu/ISCE_Group_Site/web-content/ISCE_Events/Norwood_2002/Norwood_2002_Papers/Galanter.pdf>.

Halsall, Francis. "Observing 'Systems-Art' from a Systems-Theoretical Perspective". CHArt 2005. 9 March 2007 <http://www.chart.ac.uk/chart2005/abstracts/halsall.htm>.

Innocent, Troy. "Life Signs." 9 March 2007 <http://www.iconica.org/main.htm>.

Johnson, Jeffrey. "Embracing Complexity in Design (ECiD)." 2007. 9 March 2007 <http://www.complexityanddesign.net/>.

Lyotard, Jean-François. The Postmodern Condition. Manchester: Manchester UP, 1984.

McLuhan, Marshall. The Gutenberg Galaxy: The Making of Typographic Man. Toronto: U of Toronto P, 1962.

Mitleton-Kelly, Eve, ed. Complex Systems and Evolutionary Perspectives on Organisations. Elsevier Advanced Management Series, 2003.

Prophet, Jane. "Jane Prophet." 9 March 2007 <http://www.janeprophet.co.uk/>.

Qvortup, Lars.
"Understanding New Digital Media." European Journal of Communication 21.3 (2006): 345-56.

Tredinnick, Luke. "Post-Structuralism, Hypertext & the World Wide Web." Aslib Proceedings 59.2 (2007): 169-86.

Wilson, Edward Osborne. Consilience: The Unity of Knowledge. New York: A.A. Knopf, 1998.


45

Broderick, Mick, Stuart Marshall Bender, and Tony McHugh. "Virtual Trauma: Prospects for Automediality." M/C Journal 21, no. 2 (April 25, 2018). http://dx.doi.org/10.5204/mcj.1390.

Full text

Abstract:

Unlike some current discourse on automediality, this essay eschews most of the analysis concerning the adoption or modification of avatars to deliberately enhance, extend, or distort the self. Rather than the automedial enabling of alternative, virtual selves modified by playful, confronting, or disarming avatars, we concentrate instead on emerging efforts to present the self in hyper-realist, interactive modes. In doing so we ask: what is the relationship between traumatic forms of automediation and the affective impact on, and response of, the audience? We argue that, while on the one hand there are promising avenues for valuable individual and social engagements with traumatic forms of automediation, there is an overwhelming predominance of suffering as a theme in such virtual depictions, commingled with uncritically asserted promises of empathy, which are problematic as the technology assumes greater mainstream uptake. As Smith and Watson note, embodiment is always a "translation" where the body is "dematerialized" in virtual representation ("Virtually" 78). Past scholarship has analysed the capacity of immersive realms, such as Second Life or online games, to highlight how users can modify their avatars in often spectacular, non-human forms. Critics of this mode of automediality note that users can adopt virtually any persona they like (racial, religious, gendered and sexual, human, animal or hybrid, and of any age), behaving as "identity tourists" while occupying virtual space or inhabiting online communities (Nakamura). Furthermore, recent work by Jaron Lanier, a key figure from the 1980s period of early Virtual Reality (VR) technology, has also explored so-called "homuncular flexibility", which describes the capacity for humans to adapt seemingly automatically to the control mechanisms of an avatar with multiple legs or other non-human appendages, or for two users to work in tandem to control a single avatar (Won et al.).
But this article is concerned less with these single or multi-player online environments and the associated concerns over modifying interactive identities. We are principally interested in other automedial modes where the “auto” of autobiography is automated via Artificial Intelligences (AIs) to convincingly mimic human discourse as narrated life-histories.

We draw from case studies promoted by the 2017 season of ABC television’s flagship science program, Catalyst, which opened with semi-regular host and biological engineer Dr Jordan Nguyen proclaiming in earnest, almost religious fervour: “I want to do something that has long been a dream. I want to create a copy of a human. An avatar. And it will have a life of its own in virtual reality.” As the camera followed Nguyen’s rapid pacing across real space he extolled: “Virtual reality, virtual human, they push the limits of the imagination and help us explore the impossible […] I want to create a virtual copy of a person. A digital addition to the family, using technology we have now.”

The troubling implications of such rhetoric were stark, and the next third of the program did little to allay such techno-scientific misgivings. Directed and produced by David Symonds, with Nguyen credited as co-developer and presenter, the episode “Meet the Avatars” immediately introduced scenarios where “volunteers” entered a pop-up inner-city virtual lab to experience VR for the first time. The volunteers were shown on screen subjected to a range of experimental VR environments designed to elicit fear and/or adverse and disorienting responses such as vertigo, while the presenter and researchers from Sydney University constantly smirked and laughed at their participants’ discomfort. We can only wonder what the ethics process was for both the ABC and university researchers involved in these broadcast experiments.
There is little doubt that the participant/s experienced discomfort, if not distress, and that this was televised to a national audience. Presenter Nguyen was also shown misleading volunteers on their way to the VR lab: when one asked, “You’re not going to chuck us out of a virtual plane are you?”, Nguyen replied, “I don’t know what we’re going to do yet”; it was next shown that they immediately underwent pre-programmed VR exposure scenarios, including a fear-of-falling exercise from atop a city skyscraper.

The sweat-inducing and heart-rate-racing exposures to virtual plank walks high above a cityscape, or seeing subjects haptically viewing spiders crawl across their outstretched virtual hands, all elicited predictable responses, showcased as carnivalesque entertainment for the viewing audience. As we will see, this kind of trivialising of a virtual environment’s capacity for immersion belies the serious use of the technology in a range of treatments for posttraumatic stress disorder (see Rizzo and Koenig; Rothbaum, Rizzo and Difede).

Figure 1: Nguyen and researchers enjoying themselves as their volunteers undergo VR exposure

Defining Automediality

In their pioneering 2008 work, Automedialität: Subjektkonstitution in Schrift, Bild und neuen Medien, Jörg Dünne and Christian Moser coined the term “automediality” to problematise the production, application and distribution of autobiographic modes across various media and genres—from literary texts to audiovisual media and from traditional expression to inter/transmedia and remediated formats. The concept of automediality was deployed to counter the conventional critical exclusion of analysis of the materiality/technology used for an autobiographical purpose (Gernalzick). Dünne and Moser proffered a concept of automediality that rejects the binary division of (a) self-expression determining the mediated form or (b) (self)subjectivity being solely produced through the mediating technology.
Hence, automediality has been traditionally applied to literary constructs such as autobiography and life-writing, but is now expanding into the digital domain and other “paratextual sites” (Maguire).

As Nadja Gernalzick suggests, automediality should “encourage and demand not only a systematics and taxonomy of the constitution of the self in respectively genre-specific ways, but particularly also in medium-specific ways” (227). Emma Maguire has offered a succinct working definition that builds on this requirement to signal the automedial universally, noting it operates as

a way of studying auto/biographical texts (of a variety of forms) that take into account how the effects of media shape the kinds of selves that can be represented, and which understands the self not as a preexisting subject that might be distilled into story form but as an entity that is brought into being through the processes of mediation.

Sidonie Smith and Julia Watson point to automediality as a methodology, and in doing so emphasize how the telling or mediation of a life actually shapes the kind of story that can be told autobiographically. They state: “media cannot simply be conceptualized as ‘tools’ for presenting a preexisting, essential self […] Media technologies do not just transparently present the self. They constitute and expand it” (Smith and Watson, “Virtually Me” 77).

This distinction is vital for understanding how automediality might be applied to self-expression in virtual domains, including the holographic avatar dreams of Nguyen throughout Catalyst. Although addressing this distinction in relation to online websites, following P. David Marshall’s description of “the proliferation of the public self”, Maguire notes:

The same integration of digital spaces and platforms into daily life that is prompting the development of new tools in autobiography studies […] has also given rise to the field of persona studies, which addresses the ways in which individuals engage in practices of self-presentation in order to form commoditised identities that circulate in affective communities.

For Maguire, these automedial works operate textually “to construct the authorial self or persona”.

An extension to this digital, authorial construction is apparent in the exponential uptake of screen-mediated, prosumer-generated content, whether online or theatrical (Miller). According to Gernalzick, unlike fictional drama films, screen autobiographies more directly enable “experiential temporalities”. Based on Mary Ann Doane’s promotion of the “indexicality” of film/screen representations to connote the real, Gernalzick suggests that, despite semiotic theories of the index problematising realism as an index as representation, the film medium is still commonly comprehended as the “imprint of time itself”:

Film and the spectator of film are said to be in a continuous present. Because the viewer is aware, however, that the images experienced in or even as presence have been made in the past, the temporality of the so-called filmic present is always ambiguous. (230)

When expressed as indexical, automedial works, the intrinsic audio-visual capacities of film and video (as media) far surpass the temporal limitations of print and writing (Gernalzick 228). One extreme example can be found in an emergent trend of “performance crime” murder and torture videos live-streamed or broadcast after the fact using mobile phone cameras and Facebook (Bender).
In essence, the political economy of the automedial ecology is important to understand in the overall context of self-expression and the governance of content exhibition, access, distribution and—where relevant—interaction.

So what are the implications for automedial works that employ virtual interfaces, and how does this evolving medium inform both the expressive autobiographical mode and audiences’ subjectivities?

Case Study

The Catalyst program described above strove to shed new light on the potential for emerging technology to capture and create virtual avatars from living participants who (self-)generate autobiographical narratives interactively. Once past the initial gee-whiz journalistic evangelism of VR, the episode turned towards host Nguyen’s stated goal—using contemporary technology to create an autonomous virtual human clone. Nguyen laments that if he could create only one such avatar, his primary choice would be that of his grandfather, who died when Nguyen was two years old—a desire rendered impossible.

The awkward humour of the plank walk scenario sequence soon gives way as the enthusiastic Nguyen is surprised by his family’s discomfort with the idea of digitally recreating his grandfather.

Nguyen next visits a Southern California digital media lab to experience the process by which 3D virtual human avatars are created. Inside a domed array of lights and cameras, in less than one second a life-size 3D avatar is recorded via 6,000 LEDs illuminating his face in 20 different combinations, with eight cameras capturing the exposures from multiple angles, all in ultra high definition. Called the Light Stage (Debevec), it is the same technology used to create a life-size, virtual holocaust survivor, Pinchas Gutter (Ziv).

We see Nguyen encountering a life-size, high-resolution 2D screen version of Gutter’s avatar. Standing before a microphone, Nguyen asks a series of questions about Gutter’s wartime experiences and life in the concentration camps.
The responses are naturalistic and authentic, as are the pauses between questions. The high-definition 4K screen is photo-realist but much more convincing in-situ (as an artifact of the Catalyst video camera recording, in some close-ups horizontal lines of transmission appear). According to the project’s curator, David Traum, the real Pinchas Gutter was recorded in 3D as a virtual holograph. He spent 25 hours providing 1,600 responses to a broad range of questions that the curator maintained covered “a lot of what people want to say” (Catalyst).

Figure 2: The Museum of Jewish Heritage in Manhattan presented an installation of New Dimensions in Testimony, featuring Pinchas Gutter and Eva Schloss

It is here that the intersection between VR and auto/biography hybridises in complex and potentially difficult ways. It is where the concept of automediality may offer insight into this rapidly emerging phenomenon of creating interactive, hyperreal versions of our selves using VR. These hyperreal VR personae can be questioned and respond in real-time, where users interact either as casual conversers or determined interrogators.

The impact on visitors is sobering and palpable. As Nguyen relates at the end of his session, “I just want to give him a hug”. The demonstrable capacity for this avatar to engender a high degree of empathy from its automedial testimony is clear, although as we indicate below, it could simply indicate increased levels of emotion.

Regardless, an ongoing concern amongst witnesses, scholars and cultural curators of memorials and museums dedicated to preserving the history of mass violence, and its associated trauma, is that once the lived experience and testimony of survivors passes with that generation, the impact of the testimony diminishes (Broderick). New media modes of preserving and promulgating such knowledge in perpetuity are certainly worthy of embracing.
As Stephen Smith, the executive director of the USC Shoah Foundation, suggests, the technology could extend

to people who have survived cancer or catastrophic hurricanes […] from the experiences of soldiers with post-traumatic stress disorder or survivors of sexual abuse, to those of presidents or great teachers. Imagine if a slave could have told her story to her grandchildren? (Ziv)

Yet questions remain as to the veracity of these recorded personae. The avatars are created according to a specific agenda and the autobiographical content controlled for explicit editorial purposes. It is unclear what and why material has been excluded. If, for example, during the recorded questioning, the virtual holocaust survivor became mute at recollecting a traumatic memory, cried or sobbed uncontrollably—all natural, understandable and authentic responses given the nature of the testimony—should these genuine and spontaneous emotions be included along with various behavioural tics such as scratching, shifting about in the seat and other naturalistic movements, to engender a more profound realism?

The generation of the photorealist, mimetic avatar—remaining as an interactive persona long after the corporeal, authorial being is gone—reinforces Baudrillard’s concept of simulacra, where a clone exists devoid of its original entity and unable to challenge its automedial discourse. And what if some unscrupulous hacker managed to corrupt and subvert Gutter’s AI so that it responded antithetically to its purpose, by denying the holocaust ever happened? The ethical dilemmas of such a paradigm were explored in the dystopian 2013 film The Congress, where Robin Wright plays herself (and her avatar), as an out-of-work actor who sells off the rights to her digital self.
A movie studio exploits her screen persona in perpetuity, enabling audiences to “become” and inhabit her avatar in virtual space, while she is limited in the real world from undertaking certain actions due to copyright infringement. The inability of Wright to control her mimetic avatar’s discourse or action means the assumed automedial agency of her virtual self as an immortal, interactive being remains ontologically perplexing.

Figure 3: Robin Wright undergoing a full-body photogrammetry to create her VR avatar in The Congress (2013)

The various virtual exposures/experiences paraded throughout Catalyst’s “Meet the Avatars” paradoxically recorded and broadcast a range of troubling emotional responses to such immersion. Many participant responses suggest great caution and sensitivity be undertaken before plunging headlong into the new gold rush mentality of virtual reality, augmented reality, and AI affordances. Catalyst depicted their program subjects often responding in discomfort and distress, with some visibly overwhelmed by their encounters and left crying. There is some irony that presenter Nguyen was himself relying on the conventions of 2D linear television journalism throughout, adopting face-to-camera address in (unconscious) automedial style to excitedly promote the assumed socio-cultural boon such automedial VR avatars will generate.

Challenging Authenticity

There are numerous ethical considerations surrounding the potential for AIs to expand beyond automedial (self-)expression towards photorealist avatars interacting outside of their pre-recorded content. When such systems evolve, it may be nigh impossible to discern on screen whether the person you are conversing with is authentic or an indistinguishable, virtual doppelganger. In the future, a variant on the Turing Test may be needed to challenge and identify such hyperreal simulacra.
We may be witnessing the precursor to such a dilemma playing out in the arena of audio-only podcasts, with some public intellectuals such as Sam Harris already discussing the legal and ethical problems arising from technology that can create audio from typed text which convincingly replicates the actual voice of a person by sampling approximately 30 minutes of their original speech (Harris). Such audio manipulation technology will soon be available to anybody with the motivation and a relatively minor level of technological ability to assume an identity and masquerade as automediated dialogue. However, for the moment, the ability to convincingly alter a real-time computer-generated video image of a person remains at the level of scientific innovation.

Also of significance is the extent to which the audience reactions to such automediated expressions are indeed empathetic or simply part of the broader range of affective responses that also include direct sympathy as well as emotions such as admiration, surprise, pity, disgust and contempt (see Plantinga). There remains much rhetorical hype surrounding VR as the “ultimate empathy machine” (Milk). Yet the current use of the term “empathy” in VR, AI and automedial forms of communication seems to be principally focused on the capacity for the user-viewer to ameliorate negatively perceived emotions and experiences, whether traumatic or phobic.

When considering comments about authenticity here, it is important to be aware of the occasional slippage of technological terminology into the mainstream. For example, the psychological literature does emphasise that patients respond strongly to virtual scenarios, events, and details that appear to be “authentic” (Pertaub, Slater, and Barker). Authentic in this instance implies a resemblance to a corresponding scenario/activity in the real world.
This is not simply another word for photorealism; rather, it describes, for instance, the experimental design of one study in which virtual (AI) audience members in a virtual seminar room intended to treat public speaking anxiety were designed to exhibit “random autonomous behaviours in real-time, such as twitches, blinks, and nods, designed to encourage the illusion of life” (Kwon, Powell and Chalmers 980). The virtual humans in this study are regarded as having greater authenticity than an earlier project on social anxiety (North, North, and Coble), which did not have much visual complexity but did incorporate researcher-triggered audio clips of audience members “laughing, making comments, encouraging the speaker to speak louder or more clearly” (Kwon, Powell, and Chalmers 980). The small movements, randomly cued rather than according to a recognisable pattern, are described by the researchers as creating a sense of authenticity in the VR environment as they seem to correspond to the sorts of random minor movements that actual human audiences in a seminar can be expected to make.

Nonetheless, nobody should regard an interaction with these AIs, or the avatar of Gutter, as in any way an encounter with a real person. Rather, the characteristics above function to create a disarming effect and enable the real person-viewer to willingly suspend their disbelief and enter into a pseudo-relationship with the AI; not as if it is an actual relationship, but as if it is a simulation of an actual relationship (USC). Lucy Suchman and colleagues invoke these ideas in an analysis of a YouTube video of some apparently humiliating human interactions with the MIT-created AI robot Mertz.
Their analysis contends that, while it may appear at first glance that the humans’ mocking exchange with Mertz is mean-spirited, there is clearly a playfulness and willingness to engage with a form of AI that is essentially continuous with “long-standing assumptions about communication as information processing, and in the robot’s performance evidence for the limits to the mechanical reproduction of interaction as we know it through computational processes” (Suchman, Roberts, and Hird).

Thus, it will be important for future work in the area of automediated testimony to consider the extent to which audiences are willing to suspend disbelief and treat the recounted traumatic experience with appropriate gravitas. These questions deserve attention, and not the kind of hype displayed by the current iteration of techno-evangelism. Indeed, some of this resurgent hype has come under scrutiny. From the perspective of VR-based tourism, Janna Thompson has recently argued that “it will never be a substitute for encounters with the real thing” (Thompson). Alyssa K. Loh, for instance, also argues that many of the negatively themed virtual experiences—such as those that drop the viewer into a scene of domestic violence or the location of a terrorist bomb attack—function not to put you in the position of the actual victim but in the position of the general category of domestic violence victim, or bomb attack victim, thus “deindividuating trauma” (Loh).

Future work in this area should consider actual audience responses and rely upon mixed-methods research approaches to audience analysis. In an era of alt.truth and Cambridge Analytica personality profiling from social media interaction, automediated communication in the virtual guise of AIs demands further study.

References

Anon. “New Dimensions in Testimony.” Museum of Jewish Heritage. 15 Dec. 2017. 19 Apr. 2018 <http://mjhnyc.org/exhibitions/new-dimensions-in-testimony/>.
Australian Broadcasting Corporation. “Meet The Avatars.” Catalyst, 15 Aug. 2017.
Baudrillard, Jean. “Simulacra and Simulations.” Jean Baudrillard: Selected Writings. Ed. Mark Poster. Stanford: Stanford UP, 1988. 166-184.
Bender, Stuart Marshall. Legacies of the Degraded Image in Violent Digital Media. Basingstoke: Palgrave Macmillan, 2017.
Broderick, Mick. “Topographies of Trauma, Dark Tourism and World Heritage: Hiroshima’s Genbaku Dome.” Intersections: Gender and Sexuality in Asia and the Pacific. 24 Apr. 2010. 14 Apr. 2018 <http://intersections.anu.edu.au/issue24/broderick.htm>.
Debevec, Paul. “The Light Stages and Their Applications to Photoreal Digital Actors.” SIGGRAPH Asia. 2012.
Doane, Mary Ann. The Emergence of Cinematic Time: Modernity, Contingency, the Archive. Cambridge: Harvard UP, 2002.
Dünne, Jörg, and Christian Moser. “Allgemeine Einleitung: Automedialität.” Automedialität: Subjektkonstitution in Schrift, Bild und neuen Medien. Eds. Jörg Dünne and Christian Moser. München: Wilhelm Fink, 2008. 7-16.
Harris, Sam. “Waking Up with Sam Harris #64 – Ask Me Anything.” YouTube, 16 Feb. 2017. 16 Mar. 2018 <https://www.youtube.com/watch?v=gMTuquaAC4w>.
Kwon, Joung Huem, John Powell, and Alan Chalmers. “How Level of Realism Influences Anxiety in Virtual Reality Environments for a Job Interview.” International Journal of Human-Computer Studies 71.10 (2013): 978-87.
Loh, Alyssa K. “I Feel You.” Artforum, Nov. 2017. 10 Apr. 2018 <https://www.artforum.com/print/201709/alyssa-k-loh-on-virtual-reality-and-empathy-71781>.
Maguire, Emma. “Home, About, Shop, Contact: Constructing an Authorial Persona via the Author Website.” M/C Journal 17.9 (2014).
Marshall, P. David. “Persona Studies: Mapping the Proliferation of the Public Self.” Journalism 15.2 (2014): 153-170.
Mathews, Karen. “Exhibit Allows Virtual ‘Interviews’ with Holocaust Survivors.” Phys.org Science X Network, 15 Dec. 2017. 18 Apr. 2018 <https://phys.org/news/2017-09-virtual-holocaust-survivors.html>.
Milk, Chris. “TED: How Virtual Reality Can Create the Ultimate Empathy Machine.” TED Conferences, LLC. 16 Mar. 2015. <https://www.ted.com/talks/chris_milk_how_virtual_reality_can_create_the_ultimate_empathy_machine>.
Miller, Ken. More than Fifteen Minutes of Fame: The Evolution of Screen Performance. Unpublished PhD Thesis. Murdoch University. 2009.
Nakamura, Lisa. “Cyberrace.” Identity Technologies: Constructing the Self Online. Eds. Anna Poletti and Julie Rak. Madison: U of Wisconsin P, 2014. 42-54.
North, Max M., Sarah M. North, and Joseph R. Coble. “Effectiveness of Virtual Environment Desensitization in the Treatment of Agoraphobia.” International Journal of Virtual Reality 1.2 (1995): 25-34.
Pertaub, David-Paul, Mel Slater, and Chris Barker. “An Experiment on Public Speaking Anxiety in Response to Three Different Types of Virtual Audience.” Presence: Teleoperators and Virtual Environments 11.1 (2002): 68-78.
Plantinga, Carl. “Emotion and Affect.” The Routledge Companion to Philosophy and Film. Eds. Paisley Livingstone and Carl Plantinga. New York: Routledge, 2009. 86-96.
Rizzo, A.A., and Sebastian Koenig. “Is Clinical Virtual Reality Ready for Primetime?” Neuropsychology 31.8 (2017): 877-99.
Rothbaum, Barbara O., Albert “Skip” Rizzo, and JoAnne Difede. “Virtual Reality Exposure Therapy for Combat-Related Posttraumatic Stress Disorder.” Annals of the New York Academy of Sciences 1208.1 (2010): 126-32.
Smith, Sidonie, and Julia Watson. Reading Autobiography: A Guide to Interpreting Life Narratives. 2nd ed. Minneapolis: U of Minnesota P, 2010.
———. “Virtually Me: A Toolbox about Online Self-Presentation.” Identity Technologies: Constructing the Self Online. Eds. Anna Poletti and Julie Rak. Madison: U of Wisconsin P, 2014. 70-95.
Suchman, Lucy, Celia Roberts, and Myra J. Hird. “Subject Objects.” Feminist Theory 12.2 (2011): 119-45.
Thompson, Janna. “Why Virtual Reality Cannot Match the Real Thing.” The Conversation, 14 Mar. 2018. 10 Apr. 2018 <http://theconversation.com/why-virtual-reality-cannot-match-the-real-thing-92035>.
USC. “Skip Rizzo on Medical Virtual Reality: USC Global Conference 2014.” YouTube, 28 Oct. 2014. 2 Apr. 2018 <https://www.youtube.com/watch?v=PdFge2XgDa8>.
Won, Andrea Stevenson, Jeremy Bailenson, Jimmy Lee, and Jaron Lanier. “Homuncular Flexibility in Virtual Reality.” Journal of Computer-Mediated Communication 20.3 (2015): 241-59.
Ziv, Stan. “How Technology Is Keeping Holocaust Survivor Stories Alive Forever.” Newsweek, 18 Oct. 2017. 19 Apr. 2018 <http://www.newsweek.com/2017/10/27/how-technology-keeping-holocaust-survivor-stories-alive-forever-687946.html>.

APA, Harvard, Vancouver, ISO, and other styles

46

Fuller, Glen. "The Getaway." M/C Journal 8, no. 6 (December 1, 2005). http://dx.doi.org/10.5204/mcj.2454.

Full text

Abstract:

From an interview with “Mr A”, executive producer and co-creator of the Getaway in Stockholm (GiS) films: Mr A: Yeah, when I tell my girlfriend, ‘You should watch this, it’s good, it’s a classic, it’s an old movie’ and she thinks it’s, like, the worst. And when I actually look at it and it is the worst, it is just a car chase … [Laughs] But you have to look a lot harder, to how it is filmed, you have to learn … Because, you can’t watch car racing for instance, because they are lousy at filming; you get no sensation of speed. If you watch the World Rally Championship it looks like they go two miles an hour. The hardest thing [of the whole thing] is capturing the speed … I want to engage with the notion of “speed” in terms of the necessary affects of automobility, but first I will give some brief background information on the Getaway in Stockholm series of films. Most of the information on the films is derived from the interview with Mr A carried out over dinner in Stockholm, October 2004. Contact was made via e-mail and I organised with the editors of Autosalon Magazine for an edited transcription to be published as an incentive to participate in the interview. Mr A’s “Tarantino-style” name is necessary because the films he makes with Mr X (co-creator) and a small unnamed group of others involve filming highly illegal acts: one or two cars racing through the streets of Stockholm evading police at sustained speeds well over 200 km/h. Due to a quirk in Swedish traffic law, unless they are caught within a certain time frame of committing driving offences or they actually admit to the driving offences, then they cannot be charged. The Swedish police are so keen to capture these renegade film makers that when they appeared on Efterlyst (pron: ef-de-list; the equivalent of “Sweden’s Most Wanted”) instead of the normal toll-free 1-800 number that viewers could phone to give tips, the number on the screen was the direct line to the chief of Stockholm’s traffic unit. 
The original GiS film (2000) was made as a dare. Mr A and some friends had just watched Claude Lelouch’s 1976 film C’était un Rendez-vous. Rumour has it that Lelouch had a ten-minute film cartridge and had seen how a gyro-stabilised camera worked on a recent film. He decided to make use of it with his Ferrari. He mounted the camera to the bonnet and raced through the streets of Paris. In typical Parisian style, at the end of the short nine-minute film the driver parks and jumps from the Ferrari to embrace a waiting woman for their “rendezvous”. Shortly after watching the film someone said to Mr A, “you don’t do that sort of thing in Stockholm”. Mr A and Mr X set out to prove him wrong. Nearly all the equipment used in the filming of the first GiS film was either borrowed or stolen. The Porsche used in the film (like all the cars in the films) was lent to them. The film equipment consisted of, in Mr A’s words, a “big ass” television broadcast camera and a smaller “lipstick” camera stolen from the set of the world’s first “interactive” reality TV show called The Bar. (The Bar followed a group of people who all lived together in an apartment and also worked together in a bar. The bar was a “real” bar and served actual customers.) The first film was made for fun, but after Mr A and his associates received several requests for copies they decided to ramp up production to commercial levels. Mr A has a “real job” working in advertising; making the GiS films once a year is his main job with his advertising job being on a self-employed, casual basis. As a production team it is a good example of amateurs becoming semi-professionals within the culture industries. The GiS production team distributes one film per year under the guise of being a “documentary” which allows them to escape the wrath of Swedish authorities due to further legal quirks.
Although they still sell DVDs from their Website, the main source of income comes from the sale of the worldwide distribution rights to British “powersports” specialist media company Duke Video. Duke also sells a digitally remastered DVD version of Rendezvous on their Website. As well as these legitimate distribution methods, copies of all six GiS films and Rendezvous are available on the internet through various peer-to-peer file-sharing networks. Mr A says there isn’t much he can do about online file sharing besides asking people to support the franchise if they like the films by buying the DVDs. There are a number of groups making films for car enthusiast using similar guerilla film production methods. However, most of the films are one-offs or do not involve cars driven at such radical speeds. An exception was another Swedish film maker who called himself “Ghostrider” and who produced similar films using a motorbike. Police apprehended a man who they alleged is “Ghostrider” in mid-2004 within the requisite timeframe of an offence that had been allegedly committed. The GiS films alongside these others exist within the automotive cultural industry. The automotive cultural industry is a term I am using to describe the overlap between the automotive industry and the cultural industries of popular culture. The films tap in to a niche market of car enthusiasts. There are many different types of car enthusiasts, everything from petite-bourgeois vintage-car restorers to moral panic-inducing street racers. Obviously the GiS films are targeted more towards the street racing end of the spectrum, which is not surprising because Sweden has a very developed underground street racing scene. A good example is the Stockholm-based “Birka Cup”: a quasi-professional multi-round underground street-racing tournament with 60,000 SEK (approx. AUD$11,000) prize money. The rules and rankings for the tournament are found on the tournament Website. 
To give some indication of what goes on at these events, a short teaser video clip for the 2003 Birka Cup DVD is also available for download from the Website. The GiS films have an element of the exotic European-Other about them, not only because of the street-racing pedigree exemplified by the Birka Cup and similar underground social institutions (such as another event for “import” street racers called the “Stockholm Open”), but because they capture an excess within European car culture normally associated with exotic supercars or the extravagant speeds of cars driven on German autobahns or Italian autostradas. For example, the phrase “European Styling” is often used in Australia to sell European-designed “inner-city” cars, such as the GM Holden Barina, a.k.a. the Vauxhall Corsa or the Opel Corsa. Cars from other regional manufacturing zones often do not receive such a specific regional identification; for example, cars built in Asian countries are described as “fully imported” rather than “Asian styling”. Tom O’Dell has noted that the dominant conception of automobility in Sweden is different to that of the US. That is, “automobility” needs to be qualified with a national or local context, and I assume that other national contexts in Europe would equally be just as different. However, in non-European, mainly post-colonial contexts, such as Australia, the term “European” is an affectation signaling something special. On a different axis, “excess” is directly expressed in the way the police are “captured” in the GiS films. Throughout the GiS series there is a strongly antagonistic relation to the police. The initial pre-commercial version of the first GiS film had NWA’s “f*ck the Police” playing over the opening credits. Subsequent commercially-released versions of the film had to change the opening title music due to copyright infringement issues. The “bonus footage” material of subsequent DVDs in the series represents the police as impotent and foolish.
Mr A describes it as a kind of “prank” played on police. His rationale is that they live out the fantasy of doing what “everyone” wishes they could do to the police when they are pulled over for speeding and the like; as he puts it, “flipping the bird and driving off”. The police are rendered foolish and captured on film, which is an inversion of the normative traffic-cop-versus-traffic-infringer power relation. Mr A narrows the excess of European modernity down to something specific to automobility, which is the near-universal condition of urbanity in most developed nations. The antagonism between the GiS drivers and the police is figured as a duel. The speed of the car(s) obviously exceeds what is socially and legally acceptable and therefore places the drivers in direct conflict with police. The speed captured on film is in part a product of this tension and gives speed a qualitative cultural dimension beyond a simple notion from rectilinear physics of speed as a rate of motion. The qualitative dimension of speed has been noted by Peter Wollen: Speed is not simply thrilling in itself, once sufficiently accelerated, but also enables us to enter exposed and unfamiliar situations, far removed from the zones of safety and normality – to travel into space, for instance, beyond the frontiers of the known. (106) Knowledge is subsumed by the dialectic of road safety: “safety” versus “speed”. Knowledge takes on many forms and it is here that speed gains its complexity. In the high-school physics of rectilinear motion speed refers to a rate. Mr A discusses speed as a sensation (“thrill” in the language of Wollen) in the quote at the beginning of the essay. If the body develops sensations from affects and percepts (Deleuze and Guattari 179-83), then what are the affects and percepts that are developed by the body into the sensation of speed? 
The catchphrase for the GiS films is “Reality Beats Fiction By Far!” The “reality” at stake here is not only the actuality of cars traveling at high speeds within urban spaces, which in the vernacular of automotive popular culture is more “real” than Hollywood representations, but the “reality” of automobilised bodies engaging with and “getting away” from the police. Important here is that the police serve as the symbolic representatives of the governmental institutions and authorities that regulate and discipline populations to be automobilised road users. The police are principally symbolic because one’s road-user body is policed, to a large degree, by one’s self; that is, by the perceptual apparatus that enables us to judge traffic’s rates of movement and gestures of negotiation that are indoctrinated into habit. We do this unthinkingly as part of everyday life. What I want to suggest is that the GiS films tap into the part of our respective bodily perceptual and affective configurations that allow us to exist as road users. To explain this I need to go on a brief detour through “traffic” and its relation to “speed”. Speed serves a functional role within automobilised societies. Contrary to the dominant line from the road safety industry, the “speed limit” we encounter every day on the road is not so much a limit, but a guide for the self-organisation of traffic. To think of the “speed limit” as a limit allows authorities to imagine a particular movement-based threshold of perception and action that bestows upon drivers the ability to negotiate the various everyday hazard-events that constitute the road environment. This is a negative way to look at traffic and is typical of the (post)modernist preoccupation with incorporating contingency (“the accident”) into behavioural protocol and technical design (Lyotard 65-8). It is not surprising that the road safety industry is an exemplary institution of what Gilles Deleuze called the “control society”. 
The business of the road safety industry is the perpetual modulation of road user populations in a paradoxical attempt to capture (forecast and study) the social mechanics of the accident-event while postponing its actualisation. Another way to look at traffic is to understand it as a self-organising system. Ilya Prigogine and Robert Herman modelled vehicle traffic as two flows – collective and individual – as a function of the concentration and speed of vehicles. At a certain tipping point the concentration of traffic is such that individual mobility is subsumed by the collective. Speed plays an important role both in the abstract sense of a legislated “speed limit” and as the emergent consistency of mobile road users distributed in traffic. That is, automotive traffic does not move at a constant speed, but nominally moves at a consistent speed. The rate and rhythms of traffic have a consistency that we all must become familiar with to successfully negotiate the everyday system of automobility. For example, someone simply walking becomes a “pedestrian” in the duration of automobilised time-space. Pedestrians must embody a similar sense of the rate of traffic as that perceived by drivers in the cars that constitute traffic. The pedestrian uses this sense of speed when negotiating traffic so as to cross the road, while the driver uses it to maintain a safe distance from the car in front and so on. The shared sense of speed demands an affective complicity of road-user bodies to allow them to seamlessly incorporate themselves into the larger body of traffic on a number of different registers. When road users do not comply with this shared sense of speed that underpins traffic they are met with horn blasts, rude finger gestures, abuse, violence and so on. The affects of traffic are accelerated in the body and developed by the body into the sensations and emotions of “road rage”. 
Road users must performatively incorporate the necessary dispositions for participating with other road users in traffic, otherwise they disrupt the affective script (“habits”) for the production of traffic. When I screened the first GiS film in a seminar in Sweden the room was filled with the sound of horrified gasps. Afterwards someone suggested to me that they (the Swedes) were more shocked than I (an Australian) about the film. Why? Is it because I am a “hoon”? We had all watched the same images and heard the same sounds, yet the “speeds” were not equal. They had experienced the streets in the film as a part of traffic. Their bodies knew just how slow the car was meant to be going. The film captured and transmitted the affects of a different automobilised body. Audiences follow the driver “getting away” from those universally entrusted (at least on a symbolic level) with the governance of traffic – the police – while, for a short period, becoming a new body that gets away from the “practiced perception” (Massumi 189) of habits that normatively enable the production of traffic. What is captured in the film – the event of the getaway – has the potential to develop in the body of the spectator as the sensation of “speed” and trigger a getaway of the body. Acknowledgement I would like to acknowledge the generous funding from the Centre for Cultural Research and the College of Arts, Education and Social Sciences, University of Western Sydney, in awarding me the 2004 CCR CAESS Postgraduate International Scholarship, and the support from my colleagues at the Advanced Cultural Studies Institute of Sweden where I carried out this research as a doctoral exchange student. References Deleuze, Gilles. “Postscript on Control Societies”. Negotiations. Trans. Martin Joughin. New York: Columbia UP, 1995. Deleuze, Gilles, and Felix Guattari. What Is Philosophy? Trans. Graham Burchill and Hugh Tomlinson. London: Verso, 1994. Getaway in Stockholm series. 21 Oct. 
2005 <http://www.getawayinstockholm.com>. Lyotard, Jean François. The Inhuman: Reflections on Time. Trans. Geoffrey Bennington and Rachel Bowlby. Stanford, California: Stanford UP, 1991. Massumi, Brian. “Parables for the Virtual: Movement, Affect, Sensation”. Post-Contemporary Interventions. Eds. Stanley Fish and Fredric Jameson. Durham, London: Duke UP, 2002. O’Dell, Tom. “Raggare and the Panic of Mobility: Modernity and Everyday Life in Sweden.” Car Culture. Ed. Daniel Miller. Oxford: Berg, 2001. 105-32. Prigogine, Ilya, and Robert Herman. “A Two-Fluid Approach to Town Traffic.” Science 204 (1979): 148-51. Wollen, Peter. “Speed and the Cinema.” New Left Review 16 (2002): 105–14. Citation reference for this article MLA Style Fuller, Glen. "The Getaway." M/C Journal 8.6 (2005). <http://journal.media-culture.org.au/0512/07-fuller.php>. APA Style Fuller, G. (Dec. 2005) "The Getaway," M/C Journal, 8(6). Retrieved from <http://journal.media-culture.org.au/0512/07-fuller.php>.

APA, Harvard, Vancouver, ISO, and other styles

47

Hands, Joss. "Device Consciousness and Collective Volition." M/C Journal 16, no. 6 (November 6, 2013). http://dx.doi.org/10.5204/mcj.724.

Full text

Abstract:

The article will explore the augmentation of cognition with the affordances of mobile micro-blogging apps, specifically the most developed of these: Twitter. It will ask whether this is enabling new kinds of on-the-fly collective cognition, and in particular what will be referred to as ‘collective volition.’ It will approach this with an address to Bernard Stiegler’s concept of grammatisation, which he defines as “the history of the exteriorization of memory in all its forms: nervous and cerebral memory, corporeal and muscular memory, biogenetic memory” (New Critique 33). This will be explored in particular with reference to the human relation with the time of protention, that is, an orientation to the future in the lived moment. The argument is that there is a new relation to technology, as a result of the increased velocity, multiplicity and ubiquity of micro-communications. As such this essay will serve as a speculative hypothesis, laying the groundwork for further research. The Context of Social Media The proliferation of social media, and especially its rapid shift onto diverse platforms, in particular to ‘apps’—that is, dedicated software platforms available through multiple devices such as tablet computers and smart phones—has meant that a pervasive and intensive form of communication has developed. The fact that these media are also generally highly mobile, always connected and operate through very sophisticated interfaces designed for maximum ease of use means that, at least for a significant number of users, social media has become a constant accompaniment to everyday life—a permanently unfolding self-narrative. It is against this background that multiple and often highly contradictory claims are being made about the effect of such media on cognition and group dynamics. 
We have seen claims for the birth of the smart mob (Rheingold) that opens up the realm of decisive action to multiple individuals and group dynamics, something akin to that which operates during moments of shared attention. For example, in the London riots of 2011 the use of Blackberry messenger was apportioned a major role in the way mobs moved around the city, where they gathered and who turned up. Likewise in the Arab Spring there was significant speculation about the role of Twitter as a medium for mass organisation and collective action. Why such possibilities are mooted is clear in the basic affordances of the particular social media in question, and the devices through which these software platforms operate. In the case of Twitter it is clear that the simplicity of its interface, as well as its brevity and speed, are the most important affordances. The interface is easy to use, and the action—tweeting or scrolling through a feed—is simple and specific. The limitation of messages to 140 characters ensures that nothing takes more than a small bite of attention and that it is possible, and routine, to process many messages and to communicate with multiple interlocutors, if not simultaneously then in far faster succession than is possible with previous applications or technologies. This produces a form of distributed attention, casting a wide zone of social awareness, in which the brains of Twitter users process, and are able to respond to, the perspectives of others almost instantly. Of course, the speed of the feed means that, beyond a relatively small number of followed accounts, it becomes impossible to see anything but fragments. This fragmentary character is also intensified by the inevitable limitation of the number of accounts being followed by any one user. 
In fact we can add a third factor of intensification to this when we consider the migration of social media into mobile smart phone apps using simple icons and even simpler interfaces, configured for ease of use on the move. Such design produces an even greater distribution of attention and temporal fragmentation, interspersed as these micro-interactions are with multiple everyday activities. Mnemotechnology: Spatial and Temporal Flux Attending to a Twitter feed thus places the user into an immediate relationship to the aggregate of the just passed and the passing through, a proximate moment of shared expression, but also one that is placed in a cultural short-term memory. As such Twitter is a mnemotechnology par excellence, in that it augments human memory, but in a very particular way. Its short-termness distributes memory across and between users as much, if not more, than it extends memory through time. While most recent media forms also enfold their own recording and temporal extension—print media, archived in libraries; film and television in video archives; sound and music in libraries—tweeting is closer to the form of face-to-face speech, in that while it is to an extent grammatised into the Twitter feed, its temporal extension is far more ambiguous. With Twitter, while there is some cerebral/linguistic memory extension—over say a few minutes in a particular feed, or a number of days if a tweet is given a hash tag—beyond this short-term extension any further access becomes a question of payment (after a few days hash tags cease to be searchable, with large archives of tweets being available only at a monetary cost). The luxury of long-term memory is available only to those that can afford it. Grammatisation in Stiegler’s account tends to the solidifying extension of expression into material forms of greater duration, forming what he calls the pharmakon, that is, an external object which is both poison and cure. 
Stiegler employs Donald Winnicott’s concept of the transitional object as the first of such objects in the path to adulthood, that is, the thing—be it blanket, teddy or so forth—that allows the transition from total dependency on a parent to separation and autonomy. In that sense the object is what allows for the transition to adulthood, but within it lies the danger of excessive attachment and dependency, which is "destructive of autonomy and trust" (Stiegler, On Pharmacology 3). Writing, or hypomnesis, that is, artificial memory, is also such a pharmakon, inasmuch as it operates as a salve: it allows cultural memory to be extended and shared, yet, according to Plato, it also decays autonomy of thought. In fact—taking his lead from Derrida—Stiegler tells us that “while Plato opposes autonomy and heteronomy, they in fact constantly compose” (2). The digital pharmakon, according to Stiegler, is the extension of this logic to the entire field of the human body, including cognitive capitalism and “those economic actors who are without knowledge because they are without memory” (35). This is the essence of contemporary proletarianisation, extended into the realm of consumption, in which savoir vivre, knowing how to live, is forgotten. In many ways we can see Twitter as a clear example of such a proletarianisation process, as hypomnesis, with its echo of hypnosis: an empty circulation. This echoes Jodi Dean’s description of the flow of communicative capitalism as simply drive (Dean), in which messages circulate without ever getting where they are meant to go. 
Yet against this there is perhaps a gain, even in Stiegler’s own thought, in the therapeutic or individuating elements of this process: the extension of tweets across an immediately bounded, but extensible and arbitrarily distributed, network provides a still novel form of mediation that connects brains together, going beyond the standard hyper-dyadic spread that is characteristic of viruses or memes. This spread happens in such a way that the expressed thoughts of others can circulate and mutate—loop—around in observable forms, for example in the form of replies, designation of favourite, as RTs (retweets) and in modified forms as MTs (modified tweets), followed by further iterations, and so on. So it is that the Twitter feeds of clusters of individuals inevitably start to show regularity in who tweets, and given the tendency of accounts to focus on certain issues, and for those with an interest in those issues to likewise follow each other, then we have groups of accounts/individuals intersecting with each other, re-tweeting and commenting on each other—forming clusters of shared opinion. The issue at stake here goes beyond the evolution of such clusters at the level of linguistic exchange, as what might otherwise be called movements, counter-publics, or issue networks: speed produces a more elemental effect on coordination. It is the speed of Twitter that creates an imperative to respond quickly and to assimilate vast amounts of information, to sort the agreeable from the disagreeable, divide that which should be ignored from that which should be responded to, and indeed that which calls to be acted upon. Alongside Twitter’s limited memory, its pharmacological ‘beneficial’ element entails the possibility that responses go beyond a purely linguistic or discursive interlocution towards a protention of ‘brain-share’. 
That is, to put it bluntly, the moment of knowing what others will think before they think it, what they will say before they say it and what they will do before they do it. This opens a capacity for action underpinned by confidence in a solidarity to come. We have seen this in numerous examples, in the actions of UK Uncut and other such groups and movements around the world, most significantly as the multi-media augmented movements that clustered in Tahrir Square, Zuccotti Park and beyond. Protention, Premediation, and Augmented Volition The concept of the somatic marker plays an important role in enabling this speed-up. Antonio Damasio argues that somatic markers are emotional memories that are layered into our brains as desires and preferences; in response to external stimuli they become embedded in our unconscious brain and are triggered by particular situations or events. They produce a capacity to make decisions, to act in ways that our deliberate decision making is not aware of; given the pace of response that is needed for many decisions this is a basic necessity. The example of tennis players is often used in this context, wherein the time available to react to a serve is less than the processing time the conscious brain requires; that is, there is at least a 0.5 second gap between the brain receiving a stimulus and the conscious mind registering and reacting to it. What this means is that elements of the brain are acting in advance of conscious volition—we preempt our volitions with the already inscribed emotional, or affective, layer, protending beyond the immanent into the virtual. However, protention is still, according to Stiegler, a fundamental element of consciousness—it pushes forward into the brain’s awareness of continuity, contributing to its affective reactions, rooted in projection and risk. This aspect of protention therefore is a contributing element of volition as it rises into consciousness. 
Volition is the active conscious aspect of willing, and as such requires an act of protention to underpin it. Thus the element of protention, as Stiegler describes it, is inscribed in the flow of the Twitter feed, but also, and more importantly, is written into the cognitive process that precedes and frames it. But beyond this even is the affective and emotional element. This allows us to think of the Twitter-brain assemblage as something more than just a mechanism, a tool or simply a medium in the linear sense of the term, but something closer to a device—or a dispositif as defined by Michel Foucault (194) and developed by Giorgio Agamben. A dispositif gathers together, orders and processes, but also augments. Maurizio Lazzarato uses the term, explaining that: The machines for crystallizing or modulating time are dispositifs capable of intervening in the event, in the cooperation between brains, through the modulation of the forces engaged therein, thereby becoming preconditions for every process of constitution of whatever subjectivity. Consequently the process comes to resemble a harmonization of waves, a polyphony. (186) This is an excellent framework to consolidate the place of Twitter as just such a dispositif. In the first instance the place of Twitter in “crystallizing or modulating” time is reflected in its grammatisation of the immediate into a circuit that reframes the present moment in a series of ripples and echoes, and which resonates in the protentions of the followers and followed. This organising of thoughts and affections in a temporal multiplicity crosscuts events, to the extent that the event is conceived as something new that enters the world. So it is that the permanent process of sharing, narrating and modulating changes the shape of events from pinpointed moments of impact into flat planes, or membranes, that intersect with mental events. 
The brain-share, or what can be called a ‘brane’ of brains, unfolds both spatially and temporally, but within the limits already defined. This ‘brane’ of brains can be understood in Lazzarato’s terms precisely as a “harmonization of waves, a polyphony.” The dispositif produces, in the first instance, this modulated consciousness—which is not to say an exclusive form of consciousness—as part of a distributed condition that provides for a cooperation between brains, the multifarious looping mentioned above, that in its protentions forms a harmony, which is a volition. It is therefore clear that this technological change needs to be understood together with notions such as ‘noopolitics’ and ‘neuropolitics’. Maurizio Lazzarato captures very well the notion of a noopolitics when he tells us that “We could say that noopolitics commands and reorganizes the other power relations because it operates at the most deterritorialized level (the virtuality of the action between brains)” (187). However, the danger here is well-defined in the writings of Stiegler, when he explains that: When technologically exteriorized, memory can become the object of sociopolitical and biopolitical controls through the economic investments of social organizations, which thereby rearrange psychic organizations through the intermediary of mnemotechnical organs, among which must be counted machine-tools. (New Critique 33) Here again, we find a proletarianisation, in which gestures, knowledge and know-how become—in the medium and long term—separated from the bodies and brains of workers and turned into mechanisms that make them forget. There is therefore a real possibility that the short-term resonance and collective volition becomes a distorted and heightened state, with a rather unpalatable after-effect, in which the memories remain only as commodified digital data. 
The question is whether Twitter remembers it for us, thinks it for us and, as such, in its dislocations and short-termism, also obliterates it: a scenario wherein general intellect is reduced to a state of always already forgetting. The proletarian, we read in Gilbert Simondon, is a disindividuated worker, a labourer whose knowledge has passed into the machine in such a way that it is no longer the worker who is individuated through bearing tools and putting them into practice. Rather, the labourer serves the machine-tool, and it is the latter that has become the technical individual. (Stiegler, New Critique 37) Again, this pharmacological character is apparent—Stiegler says ‘the Internet is a pharmakon’, blurring both ‘distributed’ and ‘deep’ attention (Crogan 166). It is a marketing tool par excellence, and here its capacity to generate protention operates to create not only a collective ‘volition’ but a more coercive collective disposition or tendency, that is, an unconscious willing or affective reflex. This is something more akin to what Richard Grusin refers to as premediation. In premediation the future has already happened, not in the sense that it has actually happened, but in the sense that the paths of possibility are so precluded that the future cannot be conceived otherwise. Proletarianisation operates in this way through the app: writing in this mode is not a thoughtful exchange between skilled interlocutors, but the work of habitual respondents to a standard set of pre-digested codes (in the sense of both programming and natural language) ready to hand to be slotted into place. Here the role of the somatic marker is predicated on the layering of ideology, in its full sense, into the brain’s micro-level trained reflexes. In that regard there is a proletarianisation of the prosumer, the idealised figure of the Web 2.0 discourse. 
However, it needs to be reiterated that this is not the final say on the matter, that where there is volition, and in particular collective volition, there is also the possibility of a reactivated general will: a longer-term common consciousness in the sense of class consciousness. Therefore the general claim being made here is that by taking hold of this device consciousness, and transforming it into an active collective volition, we stand the best chance of finding “a political will capable of moving away from the economico-political complex of consumption so as to enter into the complex of a new type of investment, or in other words in an investment in common desire” (Stiegler, New Critique 6). In its most simplistic form this requires a new political economy of commoning, wherein micro-blogging services contribute to a broader augmented volition that is not captured within communicative capitalism, coded to turn volition into capital, but rather directed towards a device consciousness as common desire. Needless to say it is only possible here to propose such an aim as a possible path, but one that is surely worthy of further investigation. References Agamben, Giorgio. What Is an Apparatus? Palo Alto: Stanford University Press, 2009. Crogan, Patrick. “Knowledge, Care, and Transindividuation: An Interview with Bernard Stiegler.” Cultural Politics 6.2 (2010): 157-170. Damasio, Antonio. Self Comes to Mind. London: Heinemann, 2010. Dean, Jodi. Blog Theory. Cambridge: Polity Press, 2010. Foucault, Michel. “The Confession of the Flesh.” Power/Knowledge: Selected Interviews and Other Writings. Ed. Colin Gordon. New York: Pantheon, 1980. Grusin, Richard. Premediation. Basingstoke: Palgrave, 2011. Lazzarato, Maurizio. “Life and the Living in the Societies of Control.” Deleuze and the Social. Eds. Martin Fuglsang and Bent Meier Sorensen. Edinburgh: Edinburgh University Press, 2006. Rheingold, Howard. Smart Mobs. Cambridge, Mass.: Perseus Books, 2002. Stiegler, Bernard. 
For a New Critique of Political Economy. Cambridge: Polity Press, 2010. ———. What Makes Life Worth Living: On Pharmacology. Cambridge: Polity Press, 2013.


48

Khamis, Susie. "Nespresso: Branding the "Ultimate Coffee Experience"." M/C Journal 15, no. 2 (May 2, 2012). http://dx.doi.org/10.5204/mcj.476.

Full text

Abstract:

Introduction In December 2010, Nespresso, the world’s leading brand of premium-portioned coffee, opened a flagship “boutique” in Sydney’s Pitt Street Mall. This was Nespresso’s fifth boutique opening of 2010, after Brussels, Miami, Soho, and Munich. The Sydney debut coincided with the mall’s upmarket redevelopment, which explains Nespresso’s arrival in the city: strategic geographic expansion is key to the brand’s growth. Rather than panoramic ubiquity, a retail option favoured by brands like McDonald’s, KFC and Starbucks, Nespresso opts for iconic, prestigious locations. This strategy has been highly successful: since 2000 Nespresso has recorded year-on-year growth of 30 per cent. This has been achieved, moreover, despite a global financial downturn and an international coffee market replete with brand variety. In turn, Nespresso marks an evolution in the coffee market over the last decade. The Nespresso Story Founded in 1986, Nespresso is the fastest-growing brand in the Nestlé Group. Its headquarters are in Lausanne, Switzerland, with over 7,000 employees worldwide. In 2012, Nespresso had 270 boutiques in 50 countries. The brand’s growth strategy involves three main components: premium coffee capsules, “mated” with specially designed machines, and accompanied by exceptional customer service through the Nespresso Club. Each component requires some explanation. Nespresso offers 16 varieties of Grand Crus coffee: 7 espresso blends, 3 pure origin espressos, 3 lungos (for larger cups), and 3 decaffeinated coffees. Each 5.5 grams of portioned coffee is cased in a hermetically sealed aluminium capsule, or pod, designed to preserve the complex, volatile aromas (between 800 and 900 per pod), and prevent oxidation. These capsules are designed to be used exclusively with Nespresso-branded machines, which are equipped with a patented high-pressure extraction system designed for optimum release of the coffee. 
These machines, of which there are 28 models, are developed with 6 machine partners, and Antoine Cahen, from Ateliers du Nord in Lausanne, designs most of them. For its consumers, members of the Nespresso Club, the capsules and machines guarantee perfect espresso coffee every time, within seconds and with minimum effort—what Nespresso calls the “ultimate coffee experience.” The Nespresso Club promotes this experience as an everyday luxury, whereby café-quality coffee can be enjoyed in the privacy and comfort of Club members’ homes. This domestic focus is a relatively recent turn in its history. Nestlé patented some of its pod technology in 1976; the compatible machines, initially made in Switzerland by Turmix, were developed a decade later. Nespresso S.A. was set up as a subsidiary unit within the Nestlé Group with a view to targeting the office and fine restaurant sector. It was first test-marketed in Japan in 1986, and rolled out the same year in Switzerland, France and Italy. However, by 1988, low sales prompted Nespresso’s newly appointed CEO, Jean-Paul Gillard, to rethink the brand’s focus. Gillard subsequently repositioned Nespresso’s target market away from the commercial sector towards high-income households and individuals, and introduced a mail-order distribution system; these elements became the hallmarks of the Nespresso Club (Markides 55). The Nespresso Club was designed to give members who had purchased Nespresso machines 24-hour customer service, by mail, phone, fax, and email. By the end of 1997 there were some 250,000 Club members worldwide. The boom in domestic, user-friendly espresso machines from the early 1990s helped Nespresso’s growth in this period. The cumulative efforts by the main manufacturers—Krups, Bosch, Braun, Saeco and DeLonghi—lowered the machines’ average price to around US $100 (Purpura, “Espresso” 88; Purpura, “New” 116). 
This paralleled consumers’ growing sophistication, as they became increasingly familiar with café-quality espresso, cappuccino and latté—for reasons to be detailed below. Nespresso was primed to exploit this cultural shift in the market and forge a charismatic point of difference: an aspirational, luxury option within an increasingly accessible and familiar field. Between 2006 and 2008, Nespresso sales more than doubled, prompting a second production factory to supplement the original plant in Avenches (Simonian). In 2008, Nespresso grew 20 times faster than the global coffee market (Reguly B1). As Nespresso sales exceeded $1.3 billion AU in 2009, with 4.8 billion capsules shipped out annually and 5 million Club members worldwide, it became Nestlé’s fastest growing division (Canning 28). According to Nespresso’s Oceania market director, Renaud Tinel, the brand now represents 8 per cent of the total coffee market; of Nespresso specifically, he reports that 10,000 cups (using one capsule per cup) were consumed worldwide each minute in 2009, and that increased to 12,300 cups per minute in 2010 (O’Brien 16). Given such growth in such a brief period, the atypical dynamic between the boutique, the Club and the Nespresso brand warrants closer consideration. Nespresso opened its first boutique in Paris in 2000, on the Avenue des Champs-Élysées. It was a symbolic choice and signalled the brand’s preference for glamorous precincts in cosmopolitan cities. This has become the design template for all Nespresso boutiques, what the company calls “brand embassies” in its press releases. More like art gallery-style emporiums than retail spaces, these boutiques perform three main functions: they showcase Nespresso coffees, machines and accessories (all elegantly displayed); they enable Club members to stock up on capsules; and they offer excellent customer service, which invariably equates to detailed production information. 
The brand’s revenue model reflects the boutique’s role in the broader business strategy: 50 per cent of Nespresso’s business is generated online, 30 per cent through the boutiques, and 20 per cent through call centres. Whatever floor space these boutiques dedicate to coffee consumption is—compared to the emphasis on exhibition and ambience—minimal and marginal. In turn, this tightly monitored, self-focused model inverts the conventional function of most commercial coffee sites. For several hundred years, the café has fostered a convivial atmosphere, served consumers’ social inclinations, and overwhelmingly encouraged diverse, eclectic clientele. The Nespresso boutique is the antithesis to this, and instead actively limits interaction: the Club “community” does not meet as a community, and is united only in atomised allegiance to the Nespresso brand. In this regard, Nespresso stands in stark contrast to another coffee brand that has been highly successful in recent years—Starbucks. Starbucks famously recreates the aesthetics, rhetoric and atmosphere of the café as a “third place”—a term popularised by urban sociologist Ray Oldenburg to describe non-work, non-domestic spaces where patrons converge for respite or recreation. These liminal spaces (cafés, parks, hair salons, book stores and such locations) might be private, commercial sites, yet they provide opportunities for chance encounters, even therapeutic interactions. In this way, they aid sociability and civic life (Kleinman 193). Long before the term “third place” was coined, coffee houses were deemed exemplars of egalitarian social space. As Rudolf P. Gaudio notes, the early coffee houses of Western Europe, in Oxford and London in the mid-1600s, “were characterized as places where commoners and aristocrats could meet and socialize without regard to rank” (670). From this sanguine perspective, they both informed and animated the modern public sphere. 
That is, and following Habermas, as a place where a mixed cohort of individuals could meet and discuss matters of public importance, and where politics intersected society, the eighteenth-century British coffee house both typified and strengthened the public sphere (Karababa and Ger 746). Moreover, and even from their early Ottoman origins (Karababa and Ger), there has been an historical correlation between the coffee house and the cosmopolitan, with the latter at least partly defined in terms of demographic breadth (Luckins). Ironically, and insofar as Nespresso appeals to coffee-literate consumers, the brand owes much to Starbucks. In the two decades preceding Nespresso’s arrival, Starbucks played a significant role in refining coffee literacy around the world, gauging mass-market trends, and stirring consumer consciousness. For Nespresso, this constituted major preparatory phenomena, as its strategy (and success) since the early 2000s presupposed the coffee market that Starbucks had helped to create. According to Nespresso’s chief executive Richard Giradot, central to Nespresso’s expansion is a focus on particular cities and their coffee culture (Canning 28). In turn, it pays to take stock of how such cities developed a coffee culture amenable to Nespresso—and therein lies the brand’s debt to Starbucks. Until the last few years, and before celebrity ambassador George Clooney was enlisted in 2005, Nespresso’s marketing was driven primarily by Club members’ recommendations. At the same time, though, Nespresso insisted that Club members were coffee connoisseurs, whose knowledge and enjoyment of coffee exceeded conventional coffee offerings. In 2000, Henk Kwakman, one of Nestlé’s Coffee Specialists, explained the need for portioned coffee in terms of guaranteed perfection, one that demanding consumers would expect. “In general”, he reasoned, “people who really like espresso coffee are very much more quality driven. 
When you consider such an intense taste experience, the quality is very important. If the espresso is slightly off quality, the connoisseur notices this immediately” (quoted in Butler 50). What matters here is how this corps of connoisseurs grew to a scale big enough to sustain and strengthen the Nespresso system, in the absence of a robust marketing or educative drive by Nespresso (until very recently). Put simply, the brand’s ascent was aided by Starbucks, specifically by the latter’s success in changing the mainstream coffee market during the 1990s. In establishing such a strong transnational presence, Starbucks challenged smaller, competing brands to define themselves with more clarity and conviction. Indeed, working with data that identified just 200 freestanding coffee houses in the US prior to 1990 compared to 14,000 in 2003, Kjeldgaard and Ostberg go so far as to state that: “Put bluntly, in the US there was no local coffee consumptionscape prior to Starbucks” (Kjeldgaard and Ostberg 176). Starbucks effectively redefined the coffee world for mainstream consumers in ways that were directly beneficial for Nespresso.

Starbucks: Coffee as Ambience, Experience, and Cultural Capital

While visitors to Nespresso boutiques can sample the coffee, with highly trained baristas and staff on site to explain the Nespresso system, in the main there are few concessions to the conventional café experience. Primarily, these boutiques function as material spaces for existing Club members to stock up on capsules, and therefore they complement the Nespresso system with a suitably streamlined space: efficient, stylish and conspicuously upmarket. Outside at least one Sydney boutique for instance (Bondi Junction, in the fashionable eastern suburbs), visitors enter through a club-style cordon, something usually associated with exclusive bars or hotels. This demarcates the boutique from neighbouring coffee chains, and signals Nespresso’s claim to more privileged patrons. 
This strategy though, the cultivation of a particular customer through aesthetic design and subtle flattery, is not unique. For decades, Starbucks also contrived a “special” coffee experience. Moreover, while the Starbucks model strikes a very different sensorial chord to that of Nespresso (in terms of décor, target consumer and so on) it effectively groomed and prepped everyday coffee drinkers to a level of relative self-sufficiency and expertise—and therein is the link between Starbucks’s mass-marketed approach and Nespresso’s timely arrival. Starbucks opened its first store in 1971, in Seattle. Three partners founded it: Jerry Baldwin and Zev Siegl, both teachers, and Gordon Bowker, a writer. In 1982, as they opened their sixth Seattle store, they were joined by Howard Schultz. Schultz’s trip to Italy the following year led to an entrepreneurial epiphany to which he now attributes Starbucks’s success. Inspired by how cafés in Italy, particularly the espresso bars in Milan, were vibrant social hubs, Schultz returned to the US with a newfound sensitivity to ambience and attitude. In 1987, Schultz bought Starbucks outright and stated his business philosophy thus: “We aren’t in the coffee business, serving people. We are in the people business, serving coffee” (quoted in Ruzich 432). This was articulated most clearly in how Schultz structured Starbucks as the ultimate “third place”, a welcoming amalgam of aromas, music, furniture, textures, literature and free WiFi. This transformed the café experience twofold. First, sensory overload masked the dull homogeny of a global chain with an air of warm, comforting domesticity—an inviting, everyday “home away from home.” To this end, in 1994, Schultz enlisted interior design “mastermind” Wright Massey; with his team of 45 designers, Massey created the chain’s decor blueprint, an “oasis for contemplation” (quoted in Scerri 60). 
At the same time though, and second, Starbucks promoted a revisionist, airbrushed version of how the coffee was produced. Patrons could see and smell the freshly roasted beans, and read about their places of origin in the free pamphlets. In this way, Starbucks merged the exotic and the cosmopolitan. The global supply chain underwent an image makeover, helped by a “new” vocabulary that familiarised its coffee drinkers with the diversity and complexity of coffee, and such terms as aroma, acidity, body and flavour. This strategy had a decisive impact on the coffee market, first in the US and then elsewhere: Starbucks oversaw a significant expansion in coffee consumption, both quantitatively and qualitatively. In the decades following the Second World War, coffee consumption in the US reached a plateau. Moreover, as Steven Topik points out, the rise of this type of coffee connoisseurship actually coincided with declining per capita consumption of coffee in the US—so the social status attributed to specialised knowledge of coffee “saved” the market: “Coffee’s rise as a sign of distinction and connoisseurship meant its appeal was no longer just its psychoactive role as a stimulant nor the democratic sociability of the coffee shop” (Topik 100). Starbucks’s singular triumph was to not only convert non-coffee drinkers, but also train them to a level of relative sophistication. The average “cup o’ Joe” thus gave way to the latte, cappuccino, macchiato and more, and a world of coffee hitherto beyond (perhaps above) the average American consumer became both regular and routine. By 2003, Starbucks’s revenue was US $4.1 billion, and by 2012 there were almost 20,000 stores in 58 countries. As an idealised “third place,” Starbucks functioned as a welcoming haven that flattened out and muted the realities of global trade. 
The variety of beans on offer (Arabica, Latin American, speciality single origin and so on) bespoke a generous and bountiful modernity; while brochures schooled patrons in the nuances of terroir, an appreciation for origin and distinctiveness that encoded cultural capital. This positioned Starbucks within a happy narrative of the coffee economy, and drew patrons into this story by flattering their consumer choices. Against the generic sameness of supermarket options, Starbucks promised distinction, in Pierre Bourdieu’s sense of the term, and diversity in its coffee offerings. For Greg Dickinson, the Starbucks experience—the scent of the beans, the sound of the grinders, the taste of the coffees—negated the abstractions of postmodern, global trade: by sensory seduction, patrons connected with something real, authentic and material. At the same time, Starbucks professed commitment to the “triple bottom line” (Savitz), the corporate mantra that has morphed into virtual orthodoxy over the last fifteen years. This was hardly surprising; companies that trade in food staples typically grown in developing regions (coffee, tea, and sugar) felt the “political-aesthetic problematization of food” (Sassatelli and Davolio). This saw increasingly cognisant consumers trying to reconcile the pleasures of consumption with environmental and human responsibilities. The “triple bottom line” approach, which ostensibly promotes best business practice for people, profits and the planet, was folded into Starbucks’s marketing. The company heavily promoted its range of civic engagement, such as donations to nurses’ associations, literacy programs, clean water programs, and fair dealings with its coffee growers in developing societies (Simon). This boded well for its target market. As Constance M. Ruzich has argued, Starbucks sought the burgeoning and lucrative “bobo” class, a term Ruzich borrows from David Brooks. 
A portmanteau of “bourgeois bohemians,” “bobo” describes the educated elite that seeks the ambience and experience of a counter-cultural aesthetic, but without the political commitment. Until the last few years, it seemed Starbucks had successfully grafted this cultural zeitgeist onto its “third place.” Ironically, the scale and scope of the brand’s success has meant that Starbucks’s claim to an ethical agenda draws frequent and often fierce attack. As a global behemoth, Starbucks evolved into an iconic symbol of advanced consumer culture. For those critical of how such brands overwhelm smaller, more local competition, the brand is now synonymous with insidious, unstoppable retail spread. This in turn renders Starbucks vulnerable to protests that, despite its gestures towards sustainability (human and environmental), and by virtue of its size, ubiquity and ultimately conservative philosophy, it has lost whatever cachet or charm it supposedly once had. As Bryant Simon argues, in co-opting the language of ethical practice within an ultimately corporatist context, Starbucks only ever appealed to a modest form of altruism; not just in terms of the funds committed to worthy causes, but also to move thorny issues to “the most non-contentious middle-ground,” lest conservative customers feel alienated (Simon 162). Yet, having flagged itself as an ethical brand, Starbucks became an even bigger target for anti-corporatist sentiment, and the charge that, as a multinational giant, it remained complicit in (and one of the biggest beneficiaries of) a starkly inequitable and asymmetric global trade. It remains a major presence in the world coffee market, and arguably the most famous of the coffee chains. Over the last decade though, the speed and intensity with which Nespresso has grown, coupled with its atypical approach to consumer engagement, suggests that, in terms of brand equity, it now offers a more compelling point of difference than Starbucks. 
Brand “Me”

Insofar as the Nespresso system depends on a consumer market versed in the intricacies of quality coffee, Starbucks can be at least partly credited for nurturing a more refined palate amongst everyday coffee drinkers. Yet while Starbucks courted the “average” consumer in its quest for market control, saturating the suburban landscape with thousands of virtually indistinguishable stores, Nespresso marks a very different sensibility. Put simply, Nespresso inverts the logic of a coffee house as a “third place,” and patrons are drawn not to socialise and relax but to pursue their own highly individualised interests. The difference with Starbucks could not be starker. One visitor to the Bloomingdale boutique (in New York’s fashionable Soho district) described it as having “the feel of Switzerland rather than Seattle. Instead of velvet sofas and comfy music, it has hard surfaces, bright colours and European hostesses” (Gapper 9). By creating a system that narrows the gap between production and consumption, to the point where Nespresso boutiques advertise the coffee brand but do not promote on-site coffee drinking, the boutiques are blithely indifferent to the historical, romanticised image of the coffee house as a meeting place. The result is a coffee experience that exploits the sophistication and vanity of aspirational consumers, but ignores the socialising scaffold by which coffee houses historically and perhaps naively made some claim to community building. If anything, Nespresso restricts patrons’ contemplative field: they consider only their relationships to the brand. In turn, Nespresso offers the ultimate expression of contemporary consumer capitalism, a hyper-individual experience for a hyper-modern age. By developing a global brand that is both luxurious and niche, Nespresso became “the Louis Vuitton of coffee” (Betts 14). Where Starbucks pursued retail ubiquity, Nespresso targets affluent, upmarket cities. 
As chief executive Richard Giradot put it, with no hint of embarrassment or apology: “If you take China, for example, we are not speaking about China, we are speaking about Shanghai, Hong Kong, Beijing because you will not sell our concept in the middle of nowhere in China” (quoted in Canning 28). For this reason, while Europe accounts for 90 per cent of Nespresso sales (Betts 15), its forays into the Americas, Asia and Australasia invariably spotlight cities that are already iconic or emerging economic hubs. The first boutique in Latin America, for instance, was opened in Jardins, a wealthy suburb in Sao Paulo, Brazil. In Nespresso, Nestlé has popularised a coffee experience neatly suited to contemporary consumer trends: Club members inhabit a branded world as hermetically sealed as the aluminium pods they purchase and consume. Besides the Club’s phone, fax and online distribution channels, pods can only be bought at the boutiques, which minimise even the potential for serendipitous mingling. The baristas are there primarily for product demonstrations, whilst highly trained staff recite the machines’ strengths (be they in design or utility), or information about the actual coffees. For Club members, the boutique service is merely the human extension of Nespresso’s online presence, whereby product information becomes increasingly tailored to increasingly individualised tastes. In the boutique, this emphasis on the individual is sold in terms of elegance, expedience and privilege. Nespresso boasts that over 70 per cent of its workforce is “customer facing,” sharing their passion and knowledge with Club members. Having already received and processed the product information (through the website, boutique staff, and promotional brochures), Club members need not do anything more than purchase their pods. 
In some of the more recently opened boutiques, such as in Paris-Madeleine, there is even an Exclusive Room where only Club members may enter—curious tourists (or potential members) are kept out. Club members though can select their preferred Grands Crus and check out automatically, thanks to RFID (radio frequency identification) technology inserted in the capsule sleeves. So, where Starbucks exudes an inclusive, hearth-like hospitality, the Nespresso Club appears more like a pampered clique, albeit a growing one. As described in the Financial Times, “combine the reception desk of a designer hotel with an expensive fashion display and you get some idea what a Nespresso ‘coffee boutique’ is like” (Wiggins and Simonian 10).

Conclusion

Instead of sociability, Nespresso puts a premium on exclusivity and the knowledge gained through that exclusive experience. The more Club members know about the coffee, the faster and more individualised (and “therefore” better) the transaction they have with the Nespresso brand. This in turn confirms Zygmunt Bauman’s contention that, in a consumer society, being free to choose requires competence: “Freedom to choose does not mean that all choices are right—there are good and bad choices, better and worse choices. The kind of choice eventually made is the evidence of competence or its lack” (Bauman 43-44). Consumption here becomes an endless process of self-fashioning through commodities; a process Eva Illouz considers “all the more strenuous when the market recruits the consumer through the sysiphian exercise of his/her freedom to choose who he/she is” (Illouz 392). In a status-based setting, the more finely graded the differences between commodities (various places of origin, blends, intensities, and so on), the harder the consumer works to stay ahead—which means to be sufficiently informed. Consumers are locked in a game of constant reassurance, to show upward mobility to both themselves and society. 
For all that, and like Starbucks, Nespresso shows some signs of corporate social responsibility. In 2009, the company announced its “Ecolaboration” initiative, a series of eco-friendly targets for 2013. By then, Nespresso aims to: source 80 per cent of its coffee through Sustainable Quality Programs and Rainforest Alliance Certified farms; triple its capacity to recycle used capsules to 75 per cent; and reduce the overall carbon footprint required to produce each cup of Nespresso by 20 per cent (Nespresso). This information is conveyed through the brand’s website, press releases and brochures. However, since such endeavours are now de rigueur for many brands, it does not register as particularly innovative, progressive or challenging: it is an unexceptional (even expected) part of contemporary mainstream marketing. Indeed, the use of actor George Clooney as Nespresso’s brand ambassador since 2005 shows shrewd appraisal of consumers’ political and cultural sensibilities. As a celebrity who splits his time between Hollywood and Lake Como in Italy, Clooney embodies the glamorous, cosmopolitan lifestyle that Nespresso signifies. However, as an actor famous for backing political and humanitarian causes (having raised awareness for crises in Darfur and Haiti, and backing calls for the legalisation of same-sex marriage), Clooney’s meanings extend beyond cinema: as a celebrity, he is multi-coded. Through its association with Clooney, and his fusion of star power and worldly sophistication, the brand is imbued with semantic latitude. Still, in the television commercials in which Clooney appears for Nespresso, his role as the Hollywood heartthrob invariably overshadows that of the political campaigner. These commercials actually pivot on Clooney’s romantic appeal, an appeal which is ironically upstaged in the commercials by something even more seductive: Nespresso coffee.

References

Bauman, Zygmunt. 
“Collateral Casualties of Consumerism.” Journal of Consumer Culture 7.1 (2007): 25–56.
Betts, Paul. “Nestlé Refines its Arsenal in the Luxury Coffee War.” Financial Times 28 Apr. (2010): 14.
Bourdieu, Pierre. Distinction: A Social Critique of the Judgement of Taste. Cambridge: Harvard University Press, 1984.
Butler, Reg. “The Nespresso Route to a Perfect Espresso.” Tea & Coffee Trade Journal 172.4 (2000): 50.
Canning, Simon. “Nespresso Taps a Cultural Thirst.” The Australian 26 Oct. (2009): 28.
Dickinson, Greg. “Joe’s Rhetoric: Finding Authenticity at Starbucks.” Rhetoric Society Quarterly 32.4 (2002): 5–27.
Gapper, John. “Lessons from Nestlé’s Coffee Break.” Financial Times 3 Jan. (2008): 9.
Gaudio, Rudolf P. “Coffeetalk: Starbucks™ and the Commercialization of Casual Conversation.” Language in Society 32.5 (2003): 659–91.
Habermas, Jürgen. The Structural Transformation of the Public Sphere: An Inquiry into a Category of Bourgeois Society. Cambridge: MIT Press, 1962.
Illouz, Eva. “Emotions, Imagination and Consumption: A New Research Agenda.” Journal of Consumer Culture 9 (2009): 377–413.
Karababa, Eminegül, and Güliz Ger. “Early Modern Ottoman Coffeehouse Culture and the Formation of the Consumer Subject.” Journal of Consumer Research 37.5 (2011): 737–60.
Kjeldgaard, Dannie, and Jacob Ostberg. “Coffee Grounds and the Global Cup: Global Consumer Culture in Scandinavia.” Consumption, Markets and Culture 10.2 (2007): 175–87.
Kleinman, Sharon S. “Café Culture in France and the United States: A Comparative Ethnographic Study of the Use of Mobile Information and Communication Technologies.” Atlantic Journal of Communication 14.4 (2006): 191–210.
Luckins, Tanja. “Flavoursome Scraps of Conversation: Talking and Hearing the Cosmopolitan City, 1900s–1960s.” History Australia 7.2 (2010): 31.1–31.16.
Markides, Constantinos C. “A Dynamic View of Strategy.” Sloan Management Review 40.3 (1999): 55.
Nespresso. “Ecolaboration Initiative Directs Nespresso to Sustainable Success.” Nespresso Media Centre 2009. 13 Dec. 2011. ‹http://www.nespresso.com›.
O’Brien, Mary. “A Shot at the Big Time.” The Age 21 Jun. (2011): 16.
Oldenburg, Ray. The Great Good Place: Cafés, Coffee Shops, Community Centers, Beauty Parlors, General Stores, Bars, Hangouts, and How They Get You Through the Day. New York: Paragon House, 1989.
Purpura, Linda. “Espresso: Grace under Pressure.” The Weekly Home Furnishings Newspaper 16 Dec. (1991): 88.
Purpura, Linda. “New Espresso Machines to Tempt the Palate.” The Weekly Home Furnishings Newspaper 3 May (1993): 116.
Reguly, Eric. “No Ordinary Joe: Nestlé Pulls off Caffeine Coup.” The Globe and Mail 6 Jul. (2009): B1.
Ruzich, Constance M. “For the Love of Joe: The Language of Starbucks.” The Journal of Popular Culture 41.3 (2008): 428–42.
Sassatelli, Roberta, and Federica Davolio. “Consumption, Pleasure and Politics: Slow Food and the Politico-aesthetic Problematization of Food.” Journal of Consumer Culture 10.2 (2010): 202–32.
Savitz, Andrew W. The Triple Bottom Line: How Today’s Best-run Companies are Achieving Economic, Social, and Environmental Success—And How You Can Too. San Francisco: Jossey-Bass, 2006.
Scerri, Andrew. “Triple Bottom-line Capitalism and the ‘Third Place’.” Arena Journal 20 (2002/03): 57–65.
Simon, Bryant. “Not Going to Starbucks: Boycotts and the Out-sourcing of Politics in the Branded World.” Journal of Consumer Culture 11.2 (2011): 145–67.
Simonian, Haig. “Nestlé Doubles Nespresso Output.” FT.Com 10 Jun. (2009). 2 Feb. 2012 ‹http://www.ft.com/cms/s/0/0dcc4e44-55ea-11de-ab7e-00144feabdc0.html#axzz1tgMPBgtV›.
Topik, Steven. “Coffee as a Social Drug.” Cultural Critique 71 (2009): 81–106.
Wiggins, Jenny, and Haig Simonian. “How to Serve a Bespoke Cup of Coffee.” Financial Times 3 Apr. (2007): 10.
