Dec 13, 2010

Creating order out of chaos: Managing knowledge in a globalized “Ba”

American writer and futurist Alvin Toffler (1928- ) once said that knowledge is the most democratic source of power. I’d like to start my review of this course with Toffler’s words. Undoubtedly, we humans have been aware of the incomparable power of knowledge and intelligence for a very long time: from ancient philosophers to the pioneers of the scientific revolution to modernists and then postmodernists, people have coined varied phrases conveying the same meaning, “knowledge is power” (some of us might have read similar words even in the Bible). For me, knowledge turns into power only when it is applied in practice. The story of Albert Einstein and the atomic bomb shows how powerful knowledge is when it is put to use in a certain field. And there is something else on my mind as I dive into knowledge management theories and practices: I am trying to step out of the circle and examine the knowledge-human-power issue in a more objective and macroscopic manner. From my perspective, just like any other source of power in the world, knowledge, the so-called “most democratic (I would like to add the word “friendly” here) source of power,” requires a highly systematic and effective mechanism for its control and use. That is why we need to devote our time and energy to exploring knowledge management – in fact, managing knowledge is itself a crucial form of knowledge, and sometimes it is even more important than the knowledge being managed, especially in this era of information overload.

We can’t analyze this knowledge-human-power issue without re-discussing the definition of “Ba.” Please allow me to show off my very poor third-language knowledge a little bit: this word “Ba” is written as “場” in Japanese and means location, field, or market. In the KM context, it is defined as the shared context for knowledge creation. At present and in the predictable future, the Ba in which knowledge creation carries out its SECI process (Socialization, Externalization, Combination, and Internalization) is becoming increasingly globalized. This trend is part of the macro process of globalization, in which economies, cultures, societies, and so on are integrated through a global network. A globalized Ba, to connect with what we discussed in Dr. Levy’s last lecture, is the foundation of Dr. Levy’s prediction of the future knowledge management revolution, which will realize a semantic computing/addressing system that allows humans to create and manage knowledge in a universally shared Ba.

I also want to share some of my viewpoints concerning this “KM revolution.” Generally I agree with Dr. Levy’s prediction of the direction in which the KM tools and measures of human society will develop in the coming generations, and this vision is definitely exciting and promising. Meanwhile, one can feel a strong sense of cosmopolitanism in this tendency, and it obviously pushes the trend of globalization to a new level (say, if we are able to create and share knowledge across the boundaries of language, culture, and ethnic groups, what a super species we humans will be…). My concern here is the degree of participation in (or acceptance of) this universally shared semantic network and its implications for human societies.

Yes, knowledge is the most democratic source of power, and so is useful information, but maybe they are not as democratic as we assumed. Let’s talk about the social network service tycoon Facebook. As statistics indicate, there are more than 500 million active users of Facebook today, who collectively spend over 700 billion minutes per month on it. It is not difficult to calculate that the users of Facebook have come to represent around 8% of the earth’s population within the six years since its foundation, which is undoubtedly amazing growth. But on the other hand, it also means that 92% of the people on earth are not using Facebook. And it is reasonable to predict that it will take far more than six years for Facebook to gain another 500 million active users around the world. The reason is simple: there are many factors in the real world that set up invisible barriers to open access to information and knowledge, for instance, the lack of information and communication infrastructure, the lack of education needed to use the tools, or too much government censorship. Here is the question: is it possible to realize a KM network which connects all humans together and functions as a super-virtual-world above the present real world? Just like a cosmopolitan online community?
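The percentages above can be checked with a quick back-of-the-envelope calculation. A minimal sketch, assuming a 2010 world population of roughly 6.9 billion (a figure not stated in the text):

```python
# Rough check of the Facebook adoption figures cited above.
facebook_users = 500_000_000        # active users reported in 2010
world_population = 6_900_000_000    # approximate 2010 world population (assumption)

share = facebook_users / world_population
print(f"On Facebook:  {share:.1%} of world population")
print(f"Not on Facebook: {1 - share:.1%}")
```

This yields a share of just over 7%, consistent with the “around 8%” and “92%” figures cited, given the roughness of both estimates.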

Our life is so long and so short, and that makes us eager to know what the future will be like. I have a video here depicting future living with the technologies of about 50 years from now – really exciting. So suppose that, at some point in the future, a vast majority of the people in the world will possess the knowledge, infrastructure, time, and willingness to participate in a universal knowledge creation and exchange system (a cosmopolitan online community). I am afraid two things will have to happen before that: 1. the dominant capitalist economic system will no longer exist and the global economy will run on a totally new model; 2. there will not be enough energy to fuel the world once every country is so developed. The energy crisis is one problem, not to mention the implications for philosophy, culture, education, political systems, and ideologies all over the world. Thus my viewpoint on this issue is that a giant semantic network/system will emerge in the future, and it may cover, say, 30% to 40% of the human beings on earth, which will be revolutionary enough, but not a majority (say 90% or more) of the people. I never doubt that computing technologies and human wisdom will realize a knowledge-creating and sharing platform with no boundaries and limitations, but the ones who have access to it will always be a certain group of people compared to the whole human species in the world.

We are in the middle of a quiet knowledge revolution, and what we are striving to accomplish is to create order out of the overwhelming chaos in a highly globalized Ba. I would like to call this trend the globalization of knowledge management, as both personal and organizational knowledge management strategies increasingly rely on globalization. Thanks to technology, we have the chance to embrace and explore the world on a level our predecessors never dreamt of. On the other hand, in the PKM scenario, it sometimes occurs to me that today’s PKM is more like a game of “unbearable lightness” – the way we deal with the knowledge chaos (via social media, digital technology, and so forth) always seems to fall behind the pace at which knowledge and information grow, and we are somewhat stuck in the process of receiving, filtering, gathering, and accepting new material rather than reflecting and thinking on it. In other words, we are struggling to create order in the oceans of chaotic data, but not everyone in this KM storm can survive it and take advantage of it.

Little, S., & Ray, T. (2005). Managing Knowledge: An Essential Reader. SAGE Publications, p. 25.
Little, S., & Ray, T. (2005). Managing Knowledge: An Essential Reader. SAGE Publications, p. 26.
Bhagwati, J. (2004). In Defense of Globalization. Oxford; New York: Oxford University Press.

Oct 24, 2010

Media Convergence, Audience Divergence

Media Convergence, Audience Divergence – Some of my reflections on the book Convergence Culture: Where Old and New Media Collide by Henry Jenkins (2006)

Jing Ke
Oct, 2010
Course Title: Knowledge Management

Content Summary of the Book
In short, the book Convergence Culture tackles the cultural changes and social dynamics that have emerged from the use of new media. As the author explains in the introduction, the book is about the relationship between three concepts – media convergence, participatory culture, and collective intelligence (p. 2). The word “convergence” in the context of this book means: (1) the flow of content across multiple media platforms; (2) the cooperation between multiple media industries; and (3) the migratory behaviour of today’s media audiences. According to Jenkins, in the process of convergence, the circulation of media content across different media systems, even different nations and social systems, depends heavily on consumers’ active participation.
As Jenkins points out, convergence also represents a cultural shift, as consumers are encouraged to seek out new information and make connections among dispersed media content. In contrast to the traditional media ecosystem, where producers and consumers play separate roles and stand on different levels of the information communication process, the relationship between media producers and media consumers is changing dramatically in the transformation of digitization: though to varied degrees, both become interactive participants in media activities, and along with this phenomenon comes a fluid power dynamic – this underpins what the author calls participatory culture.
One outcome of this participatory media behaviour is the collective consumption of media products and the emergence of collective intelligence, which derives from media consumers’ creative and collaborative interactions with the media and can be seen as an alternative source of media power. With this said, Jenkins explores in this book how collective meaning-making within popular culture is starting to “change the way religion, education, law, politics, advertising, and even military operate” (p. 4) in the contemporary world, with a couple of cases in each chapter.
This book also discusses the relationship between technology and the human mind in terms of media convergence. Jenkins holds that although technological innovations in information communication have huge implications for our social structures and behaviour and can bring together multiple media functions in one device, the main driver of convergence largely lies in the human mind – more specifically, in the brains of individual consumers and their social interactions with others. In other words, Jenkins highlights the cultural and social meanings of convergence rather than the merely physical means of media and information communication.
Abstract of each chapter:
Chapter 1 examines the phenomenon of Survivor spoilers in the consumption of reality television. This group of consumers is read as a vivid example of a knowledge community whose members work together to forge new knowledge and thereby bring about collective intelligence. As representatives of contemporary media consumption, their knowledge becomes an increasingly noticeable power in the age of media convergence.
Chapter 2 focuses on another well-known American reality television show, American Idol, and explores how reality television is being shaped by “affective economics”, which encourages companies to blur the line between entertainment content and brand messages and to invite the audience into the brand community. The ideal consumers are supposed to be active, emotionally engaged, and socially networked, so that they carry out more active consumption as well as protect the brand’s integrity.
Chapter 3 examines The Matrix franchise as an example of the transmedia storytelling that emerged in response to media convergence. It concludes that, relying on the active participation of knowledge communities, transmedia storytelling has become an indispensable driver of the commercial success of fictional movies and similar media products.
Chapter 4 is about fan culture, which I personally understand as a subfield of participatory culture. It deals with Star Wars fan filmmakers and gamers and explains how they satisfy their own fantasies and desires by actively reshaping the scenarios and plots of the films.
Chapter 5 goes further into fan culture and probes the politics of participation in the realm of participatory culture. It presents the struggles between fan writers and media producers over intellectual property, and the struggles between conservative and liberal Christians over their attitudes toward media convergence, globalization, and traditional authority, as well as the influences on children’s education. In general, the author holds a positive viewpoint on these issues and stands on the side of participation.
Chapter 6 turns from popular culture to public culture and argues that the lines between political culture and popular culture have blurred in the new media era. Giving the example of the 2004 American presidential campaign, Jenkins suggests that since citizens today are more engaged in popular culture than in political discourse, it is popular culture that should take responsibility for educating the public about politically important matters and for making democracy more participatory. On the other hand, with citizens participating in campaign activities, the candidates and parties are losing some control over the political process.
In conclusion, this is a book about convergence, collective intelligence, and participation in the new media era. Jenkins tries to give us a bird’s-eye view of how convergence and participation are changing the culture, politics, and economy of our society. This process can be summarized in his own words: “convergence culture represents a shift in the ways we think about our relations to media, and we are making that shift first through our relations with popular culture, but the skills we acquired through play may have implications for how we learn, work, participate in the political process, and connect with other people in the world” (p. 23).

Personal Commentary
Written in 2006, Jenkins’s book was, to a large extent, prospective. His arguments and observations on media convergence and participatory culture have proven largely accurate when we examine societies in the context of today’s developed countries. However, we have to understand that one can hardly give an all-around portrait of how media convergence changes our society in the era of digital revolution in a single book, and human culture is something that has the capacity to introspect, evolve, and regenerate over time. With that said, I personally hold the viewpoint that Jenkins in this book placed too much emphasis on the convergence side of media and society and overlooked the divergence side of the issue.
Doubtlessly, convergence is the trend of today’s world: we talk about globalization and integration in every context of human society; we are building up organizations like the EU and APEC across continents; and we are witnessing a growing number of tycoons gaining significant control over the media from which we receive information. The tide of convergence has greatly influenced and reshaped our society in terms of the media industry, social institutions, economy, culture, communication technology, and so forth, pushing them onto the track of convergence as well.
However, here I’d like to turn to the other side and ask what the driving forces of this media convergence are. Jenkins points out clearly that it is technological innovation (i.e., digitization and the Internet) and human nature (we use media as natural choices) that mainly motivated the trend of convergence. Jenkins was right, definitely, but I’d like to highlight something he also discussed in the book: the divergence of the audience, which is, from my perspective, another major motivation that has shifted the model of media communication from mass communication to segmented communication and finally to personalized communication over the past decades.
In this book, Jenkins actually mentions various examples of audience divergence but does not go much further into this discourse. To name a few: the Survivor spoilers and the grassroots fan communities, both small groups of audiences consuming and producing media products according to their own taste and favour. These groups are just a small part of the big picture of audience segmentation in the digitized era, and in my opinion, the convergence of media platforms, technologies, and even whole media industries is an essential way to maintain the audience’s attention in a situation where the audience has in fact become largely diversified. By means of media convergence, today’s audiences, especially young audiences, are able to keep participating in media consumption via multiple platforms and channels, and in return they contribute their knowledge and intelligence to media content production, making the participatory culture possible.
As illustrated in both the Survivor and American Idol cases, in today’s media context the boundary between media producers and consumers is breaking down as consumers seek to participate in generating media products – this reminds me of the term “prosumer”, which emerged in the early 1980s and means the fusion of producer and consumer (or of the professional and the consumer). We are living in a commercialized era – the media system is typically commercialized. Thus, in my opinion, the surge of the “prosumer” in media practice is also largely motivated by the phenomenon of audience divergence – more accurately, by media groups’ desire to maximize profit through media products, by-products, and the advertising industry. In the context of digitization, the only way for media groups to survive and keep production profitable is to satisfy the highly diversified audience’s tastes, and the best means to do so is to allow and encourage the audience to join the production chain and realize personalized products.
Besides exploring the transition in consumers’ behaviour under media convergence and its implications, Jenkins also points out the power shift yielded by media convergence and the cultural shift, which in my opinion accords with the postmodern way of thinking – typically, with Foucault’s statement on power/knowledge. The wide spread of computing technology and digitized information communication has witnessed the collapse of centralized power and hegemony in our society. With this collapse, mass production in society breaks into pieces, citizens gain more right to speak in the political process, and segmented audiences all over the world search for media products according to their own favour and in the way they feel most comfortable with.
To sum up, in my opinion, we can divide the outcome of digitization into two sides: convergence and divergence. Convergence is about ICTs, media industries, media products, and the other services they provide, while divergence is about the audience, their needs and tastes, and the media content they consume and produce. They are like two sides of a coin; they co-exist and interact with each other in the context of digitization.
Besides that, Jenkins raises a question in his book which I feel is worth noticing: he asks whether the changes brought about by convergence have opened new opportunities for expression or expanded the power of big media. This question requires careful consideration and is somewhat alarming. My answer is that, though power shifts drastically, the changes are actually expanding the power of big media, and this expansion is disguised under the image of new opportunities for expression and the decentralization of media power. – Anyway, time will tell in what direction we are going and whether it is a blessing or a curse.

Aug 12, 2010

Legal opinion of a fictional scenario

Jing Ke
August, 2010
Course Title: Law and the Challenges of New Media

[Review of the facts and arguments omitted]

This case is about copyright infringement on a video-sharing website with worldwide influence. Canada’s Copyright Act clearly affirms that “copyright” covers “the sole right […] to communicate the work to the public by telecommunication […] and to authorize any such acts.” The term “telecommunication” is defined as “any transmission of signs, signals, writing, images or sounds or intelligence of any nature by wire, radio, visual, optical or other electromagnetic system.” It is undeniable that, from 2005 to 2007, before YouTube and Google implemented a filtering system and other content protection measures, YouTube users commonly uploaded unauthorized clips of TV shows and movies to the website. The uploading of unauthorized materials by YouTube users is a communication that falls within the scope of the Copyright Act, and as the copyright owner of those clips, BEC has had its rights and interests infringed. The question discussed in this legal opinion is: should Google and its YouTube property be held responsible for the act of copyright infringement in this case, and to what extent? Based on the arguments made by BEC and Google, there are two sub-questions to be interpreted:

1. Are Google and YouTube validly exempted from the Copyright Act?
2. Was YouTube “looking the other way” when clips from BEC’s movie and TV productions were plentiful on the YouTube website?

Based on the facts of this case and the existing legal grounds, I personally hold the opinion that Google and its YouTube property are responsible for the acts of copyright infringement on the YouTube website in this case. As a major video-sharing website, YouTube functions as more than an innocent third-party intermediary and thus cannot be protected from being held responsible for copyright infringement committed by its users. As to the second sub-question, it is impractical to make a clear-cut evaluation of YouTube’s subjective inclinations when infringing materials proliferated. However, it is not the aim of this case to determine whether or not YouTube was “looking the other way” when infringement happened, and the answer to this question does not have a direct effect on the judgment of the case.

There is no doubt that the expansion of the Internet in contemporary society has created serious obstacles to the protection of copyright, since current communication technology (e.g., Internet-based file sharing such as P2P technology) makes it possible to exchange and transfer copyrighted materials worldwide among large numbers of people in a few seconds. Facing this situation, the current legal system is struggling to keep pace with technology. The Théberge case [2002] and the CCH v. Law Society case [2004] clearly show that the Supreme Court of Canada has described the Copyright Act as providing “a balance between promoting the public interest in the encouragement and dissemination of works of art and intellect and obtaining a just reward for the creator (or, more accurately, to prevent someone other than the copyright owner from appropriating whatever benefits may be generated)”. As Sharlow J.A. writes in the SOCAN v. CAIP case [2004]:

The capacity of the Internet to disseminate works of the art and intellect is one of the great innovations of the information age. Its use should be facilitated rather than discouraged, but this should not be done unfairly at the expense of those who created the works of arts and intellect in the first place.

Section 2.4(1)(b) of the Copyright Act provides that participants in a telecommunication who provide only “the means of telecommunication necessary” are deemed not to be communicators. The “means” include all software, connection equipment, connectivity services, hosting, and other facilities and services without which the communication could not occur. This distinction between the content suppliers or obtainers of the Internet and the infrastructure of the Internet reveals Parliament’s encouragement of the use of new communication technology, balanced against the protection of copyright owners. In the SOCAN v. CAIP case, Internet service providers such as Bell Canada were exempted from copyright liability because they limit themselves to acting as conduits and as part of the content-neutral infrastructure of the Internet; thus they fall within the protection of Section 2.4(1)(b) of the Copyright Act.

However, the facts are different in this case. Google – more specifically, its YouTube website – is a video-sharing website based on Adobe Flash Video technology, on which users can upload, share, and view videos worldwide. Though it provides a “platform” to share all kinds of videos, the role it plays in the telecommunication activity is different from that of ISPs such as Bell Canada. From my perspective, YouTube functions as a new media entity on the Internet more than as merely a conduit or infrastructure. Here is a simple example: suppose a schoolboy has used Bell Canada’s service to connect to the YouTube host server and successfully watched three unauthorized seasons of The Big Bang Theory on the YouTube website. It is obviously improper to equate the roles Bell Canada and YouTube play here: Bell Canada provides the physical means of communication, while YouTube provides the videos directly and profits from that activity as a commercial website.

Furthermore, as an influential Internet-based medium, YouTube has a responsibility to support the legal and moral systems of our society and to prevent content involving violence, racism, terrorism, genocide, and the like, as well as pirated content, from existing on the website. In this case, YouTube, together with the video uploaders, can be viewed as a “joint content provider”. In other words, both YouTube and the users who upload the copyright-infringing videos are participants in the infringing act; their responsibility in this process cannot be separated, and YouTube should be held liable because it provides the platform for the infringing act and derives large benefits from the pirated clips.
Some would argue that YouTube could also be exempted from infringement if it proved that the service it provides falls within the fair dealing defence. Section 29 of the Copyright Act provides that “Fair dealing for the purpose of research or private study does not infringe copyright.” However, the facts in this case do not support the fair dealing defence either. The clips of TV shows and movies are cultural products and mainly serve the purpose of entertainment; moreover, BEC produces and markets them in pursuit of commercial profit. The unauthorized clips on the YouTube website also brought YouTube huge benefits from advertising and so forth. Lastly, the dissemination of videos on the YouTube website is a communication to the public. Accordingly, YouTube cannot prove that its dealings with these clips are fair under s. 29 of the Copyright Act.

Another argument between BEC and Google concerns whether or not YouTube was “looking the other way” when infringing materials were plentiful on the website. As I stated, it is difficult to make a clear-cut evaluation of YouTube’s subjective inclinations in this case. The wilful blindness of YouTube seems a reasonable deduction, since every commercial organization is inclined to maximize its profit, but the argument from Google is also powerful and convincing. As a matter of fact, based on the present state of technology and the huge number of videos uploaded to YouTube every day, it is impractical to filter and supervise all the video content and eliminate copyright-infringing materials. However, as a principle of law, a reasonable explanation of wrongdoing does not amount to an exemption from punishment. In this case, YouTube participated in the violation of the Copyright Act and BEC has suffered economic loss; whether or not YouTube intended the infringing act is not determinative of the result of the case.

In addition, the judgment of this case is not only about copyright infringement by one website; it also has implications for the proper behaviour and business ethics of today’s new media. Personally speaking, our present legal system has already compromised and given enough, if not ample, space to the growth and use of new technology such as the Internet. However, there has to be a legal and moral bottom line to such use. If YouTube were exempted from copyright liability for its video-sharing services in this case, it is reasonably predictable that numerous websites providing similar services would take advantage of the judgment and expand their copyright-infringing services. This goes against the “balance” the Copyright Act strives to keep and would also largely disturb the existing market order. The social influence and future implications have to be considered.

In conclusion, given the facts, the legal grounds, and the possible social influences, I hold the opinion that Google and its YouTube property are participants in the copyright infringement alleged by BEC. YouTube’s dealings in this case fall within neither the protection of Section 2.4(1)(b) nor Section 29 of the Copyright Act of Canada, and it is unnecessary to determine whether YouTube allowed or encouraged the publication of the videos. BEC is going to win the case.

Aug 5, 2010

Reputation, Defamation and New Media

Jing Ke
August, 2010
Course Title: Law and the Challenges of New Media

The question discussed in this essay is the challenge to today’s legal system arising from the use of new media with regard to the issue of “reputation”. New media, in contrast to traditional media, refers in this essay to forms of electronic communication based on computer technology, most typically the Internet. Doubtlessly, the rise of new media and digital communication technologies in the past decades has brought a wide range of challenges to modern society, since online information publication can be fast, anonymous, global, and even uncontrollable. As a matter of fact, the surge of new media such as the Internet has become a huge potential threat to a person’s reputation in contemporary society.

Reputation (Merriam-Webster Dictionary) is defined as the overall quality or character as seen or judged by people in general, or a place in public esteem or regard (a good name). Defamation, on the other hand, is communication about a person that tends to hurt the person’s reputation. In today’s new media era, a person’s online reputation is highly connected with, and has a large influence on, the person’s reputation in the real world. As Cory J. asserted in the Hill v. Scientology case, the reputation of an individual (and of an organization as well) should be cherished above all in a democratic society. Firstly, a good reputation is closely related to the innate worthiness and dignity of the individual. Secondly, reputation is the fundamental foundation on which people are able to interact with each other in a social environment. Thirdly, it serves the important purpose of fostering a person’s self-image and sense of self-worth. Finally, it is intimately related to a person’s right to privacy. For these reasons, a person’s reputation must be protected by law, i.e., by the common law of defamation.

On the other hand, when we talk about reputation and the challenges brought by new media, we should not ignore what stands on the other side of the issue: freedom of expression. Among all the cases we read, the Hill v. Church of Scientology of Toronto case [1995], the Newman et al. v. Halstead et al. case [2006], and the Crookes v. Wikimedia Foundation Inc. case [2008] clearly show how the whole common law system aims to strike an appropriate balance between the twin values of reputation and freedom of expression, and how difficult it is to reach a convincing judgment on those issues.

The right of free speech – in other words, the freedom to express ideas and to criticize the operation of institutions and the conduct of government – is indubitably the cornerstone of a democratic society. However, no freedom is absolute, and the freedom a person enjoys in expression is also a “freedom governed by law”. The importance of free speech should not be over-emphasized at the expense of other rights, especially in cyberspace activities. As the cases reveal, with the widespread use of contemporary new media technologies, acts like defamation, libel, and false allegation can quickly and completely destroy a person’s reputation, both online and in the real world; such acts should therefore be regarded as serious offences. Besides that, on a broader scope, the issue of reputation arises in any lawsuit, since the reputation of everyone involved in the case is at stake.

The Crookes v. Wikimedia case is a good example of how the expansion of the Internet into society brings unexpected questions to our legal system regarding reputation and defamation, namely: is it an act of defamation when a website article includes a hyperlink to defamatory websites? The judgment of such cases turns largely on the intentions of the article's publisher, and on whether there has been a "publication" of the defamatory words. Based on my understanding, the act of publication can be examined in a rather objective manner, while the intention of the publisher is a more subjective issue. In many cases, extreme, biased or inappropriate expression in cyberspace may be acceptable and allowed in order to protect freedom of expression and serve the public interest; however, such expression may be only one step away from libel, malice and other illegal acts. In other words, the difficulty and vagueness of keeping the balance between reputation and freedom of expression in today's legal system is the main challenge brought by new media. As a matter of fact, a large "grey area" lies in the protection of these two fundamental human rights, and one can hardly draw a clear line between them.

The challenge to a person's reputation in today's Web 2.0 or even Web 3.0 era raises questions about the management and regulation of blogs, online chat rooms, bulletin boards and other public spaces for free expression, a challenge faced by legal systems globally. Take China's "online mob" phenomenon for example: when some Internet users publish an individual's or organization's scandal on the Internet (e.g. a husband having an affair, or companies making backstage deals in the market), hundreds of thousands of anonymous Internet users join in the publication and "attack" the person involved. They simply use their keyboards and mice as weapons, and most of the time the online mob successfully finds out very detailed personal information about those involved in the scandal (e.g. name, job, address, phone number, date of birth, family members and even pictures). They also publish numerous threatening and humiliating words aimed at the individual. As a result, the person being attacked ends up hurt both mentally and physically; some move to another city or even commit suicide under the huge pressure.

Obviously, the online mob phenomenon in China has emerged as an increasingly strong power in society and has a large influence on people's real lives. As a kind of dangerous crowd behavior, it indeed leads to an invisible but alarming violence. Though the government has started to pay more attention to regulating online expression, it seems impossible to curb the emergence of such activity. From my perspective, it is unquestionable that the Internet should be free and that online freedom of expression should be protected. This is especially important for a rapidly transforming country like China, and it will definitely help build a more open, fair and democratic society. However, all these online public spaces (blogs, chat rooms, bulletin boards and so forth) are just platforms for information communication based on new technologies, not courts. Though netizens have the right to express their ideas and opinions, they should not cross the line. Furthermore, numerous historical and current events have proved that the moral standard of the public (in this case, Internet users) is not always trustworthy, and crowd behavior has most of the time turned out to be irrational activity with miserable endings.

From my perspective, the protection of reputation in the new media era is both a legal and a moral issue. In many cases, defamation exists and damage is done, but no one can be identified to take responsibility, since the online identities involved are fake or anonymous. Policies and laws should be further developed to better protect every social member's privacy and other interests. However, when the present state of technology and the legal system cannot promise complete protection, we have to turn to morality and raise the moral standard of Internet users by means of education, mass media, community and so forth. The guideline is that the minority who hurt others should be stopped, while the majority's freedom of expression should not be disturbed.

To sum up, compared with traditional media, new media such as the Internet provide a more rapid, interactive and unrestricted means of communicating all kinds of information nationally and internationally. It is a technology we should take advantage of, but one that also necessitates rigorous and systematic regulation of its uses. Law and legislation always develop with the pace of the times, and they exist as a means of balancing all kinds of power and interest in society, including personal, public and national. The widespread use of new media today raises the question of how to fill the vacuum in the legal system concerning reputation and defamation, and the ever-changing legislative and judicial framework of contemporary society is again in transition. This reflects the development of our society and of human civilization; it also reflects the concern for, and protection of, human rights. The final goal of this transition is to enable all users of new media to enjoy their own rights and freedoms in cyberspace while guaranteeing that their uses do not infringe the rights and freedoms of others, which is also the spirit of a civil society.

Jul 19, 2010

Government Regulation and New Media

Jing Ke
July, 2010
Course Title: Law and the Challenges of New Media

“The contemporary history of new media has been characterized by conflict over the role of government in regulating the development of new media technologies and their uses.”

The development of human civilization is, to some extent, the history of technological innovations and of their recognition and utilization by human beings. However, it is not unusual in history for the impacts and effects of technological innovations to be underestimated owing to ignorance of the complexity of human nature. The issue of regulating and guiding the development of new technologies needs to be pondered even more carefully when we focus on the field of information communication technologies, because information communication is the lifeblood of human activities and social development. Generally speaking, the functions of mass media can be summarized as informing the audience, forming public opinion, educating, entertaining, and serving the economic, cultural and political systems of society, and so forth. Accordingly, considering these influential functions, the importance of government regulation of and intervention in the use of new media technologies in society needs to be stressed.

As a matter of fact, the contemporary history of new media, from radio broadcasting in the 1920s and 1930s and later cable TV to today's Internet, has witnessed a conflict over the role of government in regulating the development of new media technologies and their uses. Take the development of the radio broadcasting industry in the United States, Britain and Canada for example: as depicted in Dewar's article, though the radio broadcasting landscape was almost the same in the three nations in its infancy, distinct models emerged after years of development, and heated debate arose in these nations concerning the role of government in regulating radio broadcasting activities.

The radio boom in the US in 1921-22 brought growing disorder to the country's airwaves. The main source of profit at that time was point-to-point communication, and the development of radio broadcasting was dominated by powerful corporations such as the RCA-GE-AT&T alliance. However, increased capital and programming costs made financing radio broadcasting a major problem for the new industry. Under the influence of the rapidly expanding advertising industry and the success of AT&T's "toll broadcasting" experiment, advertising gradually became the solution to radio's growing financial problem. Commercialization advanced slowly, and more direct advertising began to be used. In light of the Radio Act of 1927 passed by Congress and years of market development, the American government had, by the early 1930s, established a policy of regulating a privately owned system based on the commercial profitability of the medium, particularly through advertising. Radio broadcasting in the US is thus thoroughly capitalist, following the faith of a free market economy.

The radio broadcasting model of Great Britain was considerably influenced by the earlier experience of the US. From the late 1920s, the British government adopted a broadcasting policy sharply contrasting with that of the United States. The BBC was established in 1922 as a company licensed by the government to provide radio programs to the public. As a result of the ensuing heated debate on patent control and licensing policy, the BBC's structure was radically altered, and it finally became a public corporation in 1927 in order to protect the nation's radio broadcasting industry from foreign competition and a revenue crisis. I personally hold that the shape of the British broadcasting model was also largely influenced by the nation's political system and government structure: a parliamentary system with multi-party competition necessitates a strong state-owned broadcasting system to minimize partisanship in radio broadcasting and cater to the public interest. As a result, by the 1930s the British government had created a completely state-owned system based on a concept of radio as a "public service". The UK's radio broadcasting system is publicly funded and has a low degree of commercialization.

Learning from the already-formed British and US patterns, with their strengths and weaknesses, the Canadian federal government intended to strike a balance between public interests (high-quality programming) and financial interests (commercial success) in the development of the radio broadcasting industry. As a matter of fact, Canada's distinctive mixed public-private broadcasting system emerged from particular Canadian conditions as well as the technical and economic factors of the period. On the one hand, the limited size of local markets and audiences failed to attract enough advertising revenue for broadcasting to survive in a highly commercialized environment like that of the US. On the other hand, the federal government intended to control the spread of direct advertising and to protect domestic radio broadcasting by setting up licensing policies. The debate on public ownership of radio broadcasting lasted for years, and both sides agreed that government assistance was necessary. The pressure of financing the radio broadcasting system, combined with the need for press and radio broadcasting to coexist in Canada, brought about the distinctive mixed public-private broadcasting system of the late 1930s.

By reviewing the history of radio broadcasting in these three nations, one can easily observe that government plays an indispensable role in regulating the development of the radio broadcasting industry, basically by means of licensing policies and varied legislation. From my perspective, the fundamental function of legislation and law in a society is to solve problems, to regulate social activities, and to maximize public and personal interests while keeping a balance between the two. Beyond that, we should not ignore that the legislation and laws passed by a government also represent the will of the nation and protect its interests in both domestic and international competition. With regard to the radio broadcasting industry in the US, Britain and Canada, the distinctive laws passed by each government represent that government's will on how to regulate, guide and curb the development of the industry. The US privately owned system based on commercial profitability, the British state-owned system based on a concept of radio as a "public service", and the Canadian mixed public-private broadcasting system were all established by means of government legislation. Though different nations faced their own specific situations during this process, the aim of regulating the development of the radio broadcasting industry was the same: to keep a balance between public interest and commercial success while striving to maximize both. This aim necessitates the intervention of government in any kind of political or economic system, since practitioners' self-regulation and the "invisible hand" of the free market are not always reliable, especially in a field as important as radio broadcasting and mass communication.

In the Canadian context, conflict exists between the federal and provincial governments over jurisdiction to pass legislation regulating broadcasting and new media. As demonstrated in the 1931 Reference re Regulation and Control of Radio Communication to the Supreme Court of Canada, the Quebec Public Service Board v. Dionne case in 1978, and so forth, the debate over who has the authority to regulate and control radio and television broadcasting and new media has become an eye-catching issue in Canadian jurisprudence. Based on my understanding, particular cultural, political and historical implications lie behind the conflict, which make the issue sensitive and contentious.

As Rinfret J. and Lamont J. argued in the 1931 Reference, the jurisdiction of the federal government over radio communication is not exclusive, and wide jurisdiction must be conceded to Parliament only in the international field, where control can only be assured by agreement or treaty between nations. However, as they argued, the issue becomes different with respect to the capturing of waves and the delivery of the messages they contain. Since the radio transmitting and receiving sets (cables, in the Quebec Public Service Board v. Dionne case) are all property operating within the province, the services they provide, including capturing the waves and delivering the messages, are "localized", and the residents of a province have the right to use them freely. Any legislation by the federal government that controls or limits the use of such property is an infringement of property and civil rights in the province. Accordingly, the authority to regulate and control such radio communication would be assigned to the provincial legislatures by the B.N.A. Act, s. 92.

However, as the other side of the debate asserted, such argumentation was unduly simplistic and failed to consider the effects and outcomes of the uses of the technology, more specifically the use of radio broadcasting as an important means of information communication. The majority of the justices in these cases agreed that the B.N.A. Act, s. 92 removes from provincial authority those works and undertakings "connecting the Province with any other or others of the Provinces, or extending beyond the Limits of the Province", and radio/television broadcasting surely falls into this category. More importantly, they argued, the issue of radio broadcasting is not merely a matter of a transmitter or a receiver as pieces of property and equipment; it is a matter of information communication by means of these properties, and the effects of that means of communication cannot be confined within the limits of the province. As a matter of fact, the effects and influences of radio and television broadcasting are undoubtedly enormous and nation-wide.

As mentioned above, the functions of radio and television broadcasting as new means of mass communication range from informing to entertaining to educating and socializing the audience through the varied programs and information they provide. The linguistic, cultural, ideological and political inclinations conveyed in the programs are so crucial for a unified nation that they have to be regulated and controlled under the authority of the federal government. This is the only way to avoid a messy and troublesome situation in the development of the industry and to maintain the stability and unity of the nation.

Canada's long-standing debate on regulating new media reminds me of what happened earlier this year, when Google stopped its Chinese search site in March and moved its branch from mainland China to Hong Kong in order to protest the content censorship requirements and control from the central government in Beijing. In this case a conflict also lies in the use of the Internet search engine as a new technology: on the one hand, there is Google's commercial profitability and its faith in freedom of expression as the cornerstone of a free democratic society; on the other, there is the will and interest of the Chinese government, which wishes to keep a "harmonious society" (or the "greater good") by censoring Internet content. I am not advocating strong censorship policies over online expression here; I personally have had enough experience of that. But I do admit that everything exists for a reason. The implications and impacts of a new technology, in this case an Internet search engine that provides its users with access to any kind of information, have to be considered seriously, and the particular conditions of the nation must be taken into account. Sometimes the effects of new media technologies are so huge that strong government control and intervention is a must; this applies in any political, economic, cultural or ideological system. (Interestingly, just last week the Chinese government issued a new ICP license to Google and allowed it to continue to provide web search and local products to users in China. Though users can click a link and search via Hong Kong to get uncensored results, no compromise has been made on content censoring in the Chinese search site. Obviously, commercial profitability won out in this case, since Google's stock price has dropped about 18% since it pulled out of China.)

From my perspective, nothing is more complicated than dealing with social life and human nature. Whenever a new media technology has been invented, whether the printing press, the telegraph, or radio, television and the Internet, its effects and implications have always turned out to exceed our expectations. Effective regulation means a more robust industry and a more ordered society that benefits from the new technologies. That is why, in a modern democratic state, the executive, the legislature and the judiciary need to play important roles in regulating the development and use of new media.

Apr 2, 2010

Lab Report - Hope and Maslow's hierarchy of needs

Jing Ke
Mar, 2010
Course Title: Research Method

Lab Report - An exploration of the biggest hopes of contemporary Canadian undergraduate students: related to Maslow's hierarchy of needs theory


From the psychological perspective, hope has been broadly characterized as the "will" and the "ways" to achieve goals (Snyder, 2002). It is a reflection of positive needs and motivations in human psychology. Maslow (1943) proposed a hierarchy of needs theory depicting five layers of human needs; this theory suggests that people are motivated to fulfill basic needs before moving on to higher needs. Accordingly, studying people's hopes and relating them to Maslow's hierarchy of needs is a good way to investigate their psychological status and wellbeing.

Undergraduate students are a unique group in social demographics. On the one hand, they are young adults stepping into society and starting to frame their individual personalities and worldviews. On the other hand, they do not yet possess economic independence and are still closely bonded to their families. Psychologists suggest that caring about these students' hopes and motivations may help them reach their education-related goals and become more hopeful in life (Snyder et al., 2003). The aim of this study is to find out the status of "hopefulness" in a group of Canadian undergraduate students and to analyze how their hopes relate to Maslow's hierarchy of needs theory.

Based on the preceding analysis, two research questions are developed for this study:
1. What are the biggest hopes of the sampled Canadian undergraduate students?
2. What are the possible incentives of these hopes, and how do they relate to Maslow's hierarchy of needs theory?

In this study, a group of undergraduate students from the University of Ottawa served as the sample, and data were generated from the sample by presenting them with the question "What are your biggest hopes?" Each student created a collage plus a text description to express their answer. Only the text data are used in this study, since the collages contain almost the same information. The content analysis research method is adopted.

Review of Literature

Hope is a desire accompanied by expectation of or belief in fulfillment (Merriam-Webster Online Dictionary, 2010). Snyder and colleagues (Snyder et al., 1991) introduced a new cognitive, motivational model called Hope Theory. According to this theory, hope reflects individuals' perceptions of their capacities to: (1) clearly conceptualize goals, (2) develop specific strategies to reach those goals (pathways thinking), and (3) initiate and sustain the motivation for using those strategies (agency thinking). The theory also suggests that a goal can be anything an individual desires to experience, create, get, do, or become. It may be a significant, lifelong pursuit, or mundane and brief (Snyder et al., 2003).

High-hope and low-hope individuals are distinguished according to their perceived probabilities of attainment. Snyder et al. (1991, 1996) argued that high-hope individuals tend to prefer "stretch goals" slightly more difficult than previously attained goals, and to develop alternative strategies to achieve their goals, especially when the goals are important and obstacles appear. Accordingly, high-hope people are more likely to achieve success and to have a greater perceived purpose in life (Snyder et al., 2003).

As mentioned, hope is a reflection of people’s desires and expectations, which can be related to Maslow’s interpretation of human needs. Maslow (1943) outlined a hierarchy of needs theory which divides human needs into five levels:

1. Physiological Needs
These include the most basic needs that are vital to survival, such as the need for water, air, food and sleep. Maslow believed that these are the most basic and instinctive needs in the hierarchy because all other needs remain secondary until these physiological needs are met.

2. Security Needs
These include needs for safety and security. Security needs are important for survival, but they are not as demanding as the physiological needs. Examples of security needs include a desire for steady employment, health insurance, safe neighborhoods and shelter from the environment.

3. Social Needs
These include needs for belonging, love and affection. Maslow considered these needs to be less basic than physiological and security needs. Relationships such as friendships, romantic attachments and families help fulfill this need for companionship and acceptance, as does involvement in social, community or religious groups.

4. Esteem Needs
After the first three levels of needs have been satisfied, esteem needs become increasingly important. These include the need for things that reflect self-esteem, personal worth, social recognition and accomplishment.

5. Self-actualizing Needs
This is the highest level of Maslow's hierarchy of needs. Self-actualizing people are self-aware, concerned with personal growth, less concerned with the opinions of others, and interested in fulfilling their potential.

Maslow’s hierarchy of needs is most often displayed as a pyramid. The lowest levels of the pyramid are made up of the most basic needs, while the more complex needs are located at the top of the pyramid. Once the lower-level needs have been met, people can move on to the next level of higher needs (Koltko-Rivera, 2006).

Although no literature has been found to support my viewpoint, I maintain that studying people's hopes and relating them to Maslow's hierarchy of needs theory is an interesting way to interpret people's needs and desires. In this study specifically, by analyzing the biggest hopes of the sampled students, we can code them into matching levels of Maslow's hierarchy of needs and demonstrate what kind of needs they are, which will further bring out more meaningful findings.

Research Method

Content analysis is a research technique for the systematic classification and description of communication content according to certain usually predetermined categories (Wright, 1986). It may involve quantitative or qualitative analysis, or both.

Quantitative content analysis is the systematic and replicable examination of symbols of communication, which have been assigned numeric values according to valid measurement rules, and the analysis of relationships involving those values using statistical methods, in order to describe the communication, draw inferences about its meaning, or infer from the communication to its context of production and consumption (Riffe et al., 2005). Meanwhile, qualitative content analysis employs approaches such as discourse analysis, rhetorical analysis, ethnographic analysis, and conversation analysis (Altheide, 1996).

This study adopts both quantitative content analysis (referring to the statistical analysis of the collected data) and qualitative content analysis (referring to the interpretation of texts and statistics). Detailed data collection and data analysis procedures are as follows:

Sampling and Data Collection
The sample size of this study is relatively small: texts generated by a group of undergraduate students from the University of Ottawa served as the data source. After an initial filtering of the texts, a 673-word, 2-page MS Word document gathered from seven participants was used as the transcription for the coding procedure and further analysis.

Data Analysis
The coding procedure involves scrutinizing the content of the transcription, highlighting the key words and sentences that answer the question "What are your biggest hopes?", and finally condensing the meanings into single words and phrases (codes). Ten codes were generated in this procedure, and the number of times each code is mentioned was also counted. In sequence, these codes are: success, love, family, health, happiness, owns a house, travels the world, all dreams come true, end poverty, and end climate change.

In the next step, the findings from the coding procedure are themselves coded into the five levels of Maslow's hierarchy of needs, which serve as "categories" in this study. These categories are: physiological, safety, love/belonging, esteem, and self-actualization. As in the previous step, the number of code mentions belonging to each category, as well as its percentage, is calculated. The statistics generated in these steps make the findings of this study more observable and persuasive. Further interpretation and discussion focus on the codes, the categories, and the statistics.
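The two counting steps above (tallying code mentions, then aggregating them into Maslow categories and computing each category's percentage share) can be sketched in a few lines of code. This is an illustrative sketch only: the code-to-category mapping and the mention counts below are hypothetical placeholders, since the study's actual figures from Table 1 are not reproduced here.

```python
from collections import Counter

# Hypothetical mapping of the study's codes to Maslow categories.
# (Which category each code belongs to is my assumption, not the study's.)
CODE_TO_CATEGORY = {
    "success": "esteem",
    "love": "love/belonging",
    "family": "love/belonging",
    "health": "safety",
    "happiness": "esteem",
    "owns a house": "safety",
    "travels the world": "self-actualization",
    "all dreams come true": "self-actualization",
    "end poverty": "self-actualization",
    "end climate change": "self-actualization",
}

def category_percentages(code_counts):
    """Aggregate code mention counts into Maslow categories and return
    each category's share as a percentage of all mentions."""
    totals = Counter()
    for code, n in code_counts.items():
        totals[CODE_TO_CATEGORY[code]] += n
    grand_total = sum(totals.values())
    return {cat: round(100 * n / grand_total, 2) for cat, n in totals.items()}

# Illustrative placeholder counts (not the study's actual data):
example_counts = {"success": 5, "love": 2, "family": 2,
                  "health": 3, "happiness": 1, "owns a house": 1}
print(category_percentages(example_counts))
```

With real counts from Table 1 in place of the placeholders, this computation would reproduce the category percentages reported in Table 2.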


Quantitative Findings
The quantitative findings of this study are summarized in Table 1 and Table 2. Table 1 presents the codes that emerged from the content analysis, while Table 2 gives the statistical analysis of the codes and categories. The data in Table 2 are derived from the results in Table 1.

Table 1 Findings of coding procedure

Table 2 Further analysis related to Maslow’s hierarchy of needs

Qualitative Findings
Based on the preceding quantitative findings, some qualitative analysis can be carried out. The first, fundamental finding is that, as the codes reflect, all the "biggest hopes" generated by the students are conventional and positive, which indicates that the psychological status of the sample is generally good. The students also possess the capacity to "clearly conceptualize goals" (Snyder et al., 1991), which is the first step toward hope attainment according to Hope Theory.

As for the second finding, according to Table 1, the biggest hope for the majority of the participating undergraduate students is to obtain success in their ongoing studies and future careers, rather than the acquisition of love, family, health, happiness, etc., which traditional values consider supremely important. This may be explained by the worship of success stories, severe peer competition, and the prevailing commercialism and utilitarianism in contemporary society. Such tendencies have inevitably influenced the values and worldviews of today's youth, making them more materialistic and pragmatic than before.

The second finding is reinforced by the data in Table 2. As we can see, the levels of Maslow's hierarchy of needs representing the students' hopes are, in sequence, Esteem (41.38%), Safety (37.93%) and Love/Belonging (20.69%). In terms of Maslow's (1943) account, the sampled students give priority to things related to self-esteem, personal worth, social recognition and accomplishment. They also pay much attention to guarantees of safety and security, such as steady employment and shelter from the environment. Compared with these, the need for love and belonging has been put in a lower position. Though the need for love and belonging is obviously still intense for young adults today, its priority has been displaced by the zest for success.

In conclusion, the principal qualitative findings of the study can be summarized as follows: firstly, the sampled students are, in general, in a good and positive psychological state; secondly, their life goals and needs are more materialistic and pragmatic than expected.

Interpretation and Discussion

Studying students' biggest hopes in combination with Maslow's hierarchy of needs theory provides an alternative way to understand contemporary Canadian undergraduate students' psychological status and needs. As the findings show, what the students currently desire most in their lives is directly shaped and influenced by the macro environment of society. They tend to give more credit to personal achievement and material guarantees than to love and belonging. These findings indicate that, on the one hand, today's undergraduate students are aware of the highly competitive pressure in society and wish to become independent and respected individuals in the future. On the other hand, it is possible that they are not yet mature enough to realize the importance of love, family and friends in their lives and thus take them for granted. Further studies are needed to find out the real incentives behind this phenomenon.

However, despite its significance, the study itself is far from perfect. A main weakness is the small sample size, which directly threatens the validity of the findings. The texts used for data analysis total 673 words and were generated by only seven students, which is obviously not sufficient for a valid statistical analysis. Another restriction is the lack of chronological comparative data. As proposed above, the second finding indicates that students today are becoming more materialistic and pragmatic than before; however, this argument is derived partly from the data collected and partly from my intuition, observation and life experience. If more data were available to support a chronological comparison, the findings would be more powerful.

To sum up, the study of contemporary Canadian undergraduate students' biggest hopes combined with Maslow's hierarchy of needs is meaningful and significant. What needs to be improved in future studies, however, is the development of more systematic data collection and better-organized data analysis procedures for a mature research design.


Altheide, D.L. (1996). Qualitative Media Analysis. Thousand Oaks: Sage.
Goebel, B.L. & Brown, D.R. (1981). Age differences in motivation related to Maslow's need hierarchy. Developmental Psychology. 17(6), 809-815.
hope. (2010). In Merriam-Webster Online Dictionary. Retrieved March 15, 2010, from
Koltko-Rivera, M.E. (2006). Rediscovering the later version of Maslow’s hierarchy of needs: Self-transcendence and opportunities for theory, research, and unification. Review of General Psychology, 10(4), 302-317.
Maslow, A.H. (1943). A theory of human motivation. Psychological Review. 50(4). 370-396.
Riffe, D., Lacy, S. and Fico, F.G. (2005). Analyzing Media Messages. (2nd Edition). New Jersey: Lawrence Erlbaum.
Snyder, C. R., Harris, C., Anderson, J. R., Holleran, S. A., Irving, L. M., Sigmon, S. T. et al. (1991). The will and the ways: Development and validation of an individual-differences measure of hope. Journal of Personality and Social Psychology, 60, 570–585.
Snyder, C. R., Sympson, S. C., Ybasco, F. C., Borders, T. F., Babyak, M. A., & Higgins, R. L. (1996). Development and validation of the State Hope Scale. Journal of Personality and Social Psychology, 70, 321–335.
Snyder, C.R. (2002). Hope theory: Rainbows in the mind. Psychological Inquiry, 13, 249-275.
Snyder, C.R., Lopez, S.J., Shorey, H.S., Rand, K.L., and Feldman, D.B. (2003). Hope theory, measurements, and applications to school psychology. School Psychology Quarterly, 18(2), 122-139.
Snyder, C.R., Shorey, H.S., Cheavens, J., Pulvers, K.M., Adams, V.H.III, and Wiklund, C. (2002). Hope and academic success in college. Journal of Educational Psychology, 94(4), 820-826.
Wright, C. R. (1986). Mass Communication: A Sociological Perspective (3rd ed.). New York: Random House.

Mar 27, 2010

Grounded Theory

Jing Ke & Sarah Wenglensky
Feb, 2010
Course Title: Research Method

Grounded Theory - Handout

It’s a world view that says not to have a world view when doing research.
---Created by us

The methodology of grounded theory was developed by American sociologists Glaser and Strauss in 1967 to describe a new qualitative research method they had used in their 1965 study Awareness of Dying. In that study, they adopted an investigative research method with no preconceived hypothesis and used constant comparative analysis of the data. They believed that a theory obtained by this method is truly grounded in the data; for this reason they named the methodology “grounded theory” (Glaser & Strauss, 1967).

The goal of the grounded theory approach is to generate a theory that explains how an aspect of the social world “works”: a theory that emerges from, and is therefore connected to, the very reality it was developed to explain.

Key definitions
According to Creswell (2009), grounded theory is “a qualitative strategy of inquiry in which the researcher derives a general, abstract theory of process, action, or interaction grounded in the views of participants in a study” (pp. 13, 229). This process involves using multiple stages of data collection and the refinement and interrelationship of categories of information (Charmaz, 2006; Strauss and Corbin, 1990, 1998).

Other definitions of grounded theory:
Grounded theory is “a systematic qualitative research methodology in the social sciences emphasizing generation of theory from data in the process of conducting research” (Martin & Turner, 1986).

“The grounded theory approach is a qualitative research method that uses a systematic set of procedures to develop an inductively derived grounded theory about a phenomenon.” (Strauss and Corbin, 1990)

A complete grounded theory research design often contains the elements listed in Table 1. These steps may not be undertaken sequentially in the research; the researchers sometimes need to go back and forth amongst several steps.

Table 1 General elements in a grounded theory research design
1. Question formulating
2. Theoretical sampling
3. Interview transcribing and Contact summary
4. Data chunking and Data naming – Coding
5. Developing conceptual categories
6. Constant comparison
7. Analytic memoing
8. Growing theories

Defining features
Two primary characteristics of grounded theory research design:

1) the constant comparison of data with emerging categories and,
2) theoretical sampling of different groups to maximize the similarities and differences of information (Creswell, 2009, p.13).

Current uses of grounded theory
Grounded theory is a powerful research method for collecting and analyzing data. Traditional research designs usually rely on a literature review leading to the formation of a hypothesis, which is then tested through experimentation in the real world.

Grounded theory investigates the actualities in the real world and analyses the data with no preconceived ideas or hypothesis (Glaser & Strauss, 1967). In other words, grounded theory suggests that theory emerges inductively from the data (Chesebro & Borisoff, 2007). Though it can be used in different types of research, grounded theory is often adopted to formulate hypotheses or theories based on existing phenomena, or to discover the participants’ main concern and how they continually try to resolve it (Glaser, 1992).

Strengths and weaknesses
Due to the difficulties and weaknesses encountered when applying grounded theory, this methodology is still not widely used or understood by researchers in many disciplines (Allan, 2003).

Strengths:
An effective approach to build new theories and understand new phenomena
High quality of the emergent theory
Emergent research design reflects the idiosyncratic nature of the study
Findings and methods are always refined and negotiated
Requires detailed and systematic procedures for data collection, analysis and theorizing
The resulting theory and hypotheses help generate future investigation into the phenomenon
Requires the researcher to be open minded, and able to look at the data through many lenses
Data collection occurs over time, and at many levels, helping to ensure meaningful results

Weaknesses:
Huge volumes of data
Time consuming and painstakingly precise process of data collection/analysis
Lots of noise and chaos in the data
Prescribed application required for the data-gathering process
There are tensions between the evolving and inductive style of a flexible study and the systematic approach of grounded theory.
It may be difficult in practice to decide when the categories are “saturated” or when the theory is sufficiently developed
It is not possible to start a research study without some pre-existing theoretical ideas and assumptions
Requires high levels of experience, patience and acumen on the part of the researcher

Data Collection
This is not to suggest that classic grounded theory is free of any theoretical lens but rather that it should not be confined to any one lens; that as a general methodology, classic grounded theory can adopt any epistemological perspective appropriate to the data and the ontological stance of the researcher (Holton, 2009).

Data collection of grounded theory is directed by theoretical sampling, which means that the sampling is based on theoretically relevant constructs. It enables the researcher to select subjects that maximize the potential to discover as many dimensions and conditions related to the phenomenon as possible (Strauss & Corbin, 1998).

Many studies, in their early stages, use open sampling methods to identify individuals, objects or documents. This way, the data’s relevance to the research question can be assessed early on, before too much time and money has been invested (Davidson, 2002).

Grounded theory data collection is usually, but not exclusively, done through interviews. In fact, any data collection method can be used: focus groups, observations, informal conversation, group feedback analysis, or any other individual or group activity that yields data (Dick, 2005).

Interview transcribing is probably one of the most time-consuming parts of the research. Researchers are advised to transform the tape recordings of interviews and other notes into word-for-word transcripts for further analysis. However, some researchers (Glaser, 1992; Dick, 2005) argue that taking key-word notes during the interviews, tape-recording them, checking the notes against the recordings and converting them to themes afterwards can do the job just as well and is less time-consuming.

Data Analysis and Interpretation
I believe grounded theory draws from literary analysis, and one can see it here. The advice for building theory parallels advice for writing a story. Selective coding is about finding the driver that impels the story forward. (Borgatti)

Grounded theory data analysis involves searching out the concepts behind the actualities by looking for codes, then concepts and finally categories.

1. Codes: coding is a form of content analysis to find and conceptualize the underlying issues amongst the “noise” in the data. During the analysis of an interview, the researcher will become aware that the interviewee is using words and phrases that highlight an issue of importance or interest to the research; they are noted and described in a short phrase. The issue may be mentioned again in the same or similar words and is again noted. This process is called coding and the short descriptor phrase is a code (Allan, 2003).


“Pain relief is a major problem when you have arthritis. Sometimes, the pain is worse than other times, but when it gets really bad, whew! It hurts so bad, you don't want to get out of bed. You don't feel like doing anything. Any relief you get from drugs that you take is only temporary or partial.” (interviewee)

One thing that is being discussed here is PAIN. Implied in the text is that the speaker views pain as having certain properties, one of which is INTENSITY: it varies from a little to a lot. (When is it a lot and when is it little?) When it hurts a lot, there are consequences: don't want to get out of bed, don't feel like doing things (what are other things you don't do when in pain?). In order to solve this problem, you need PAIN RELIEF. One AGENT OF PAIN RELIEF is drugs (what are other members of this category?). Pain relief has a certain DURATION (could be temporary), and EFFECTIVENESS (could be partial).
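As a concrete illustration of the coding step, the codes and properties pulled out of the passage above could be recorded as simple structured data. The sketch below is our own construction in Python, not a prescribed grounded theory notation; the field names are invented for illustration:

```python
# The interview excerpt being coded (abridged from the passage above).
segment = ("Pain relief is a major problem when you have arthritis. "
           "Any relief you get from drugs that you take is only "
           "temporary or partial.")

# Codes noted by the researcher, each with the properties and
# consequences observed so far. Labels follow the walkthrough above.
codes = {
    "PAIN": {
        "properties": {"INTENSITY": "varies from a little to a lot"},
        "consequences": ["don't want to get out of bed",
                         "don't feel like doing anything"],
    },
    "PAIN RELIEF": {
        "agents": ["drugs"],  # one agent of pain relief; others may emerge
        "properties": {"DURATION": "temporary",
                       "EFFECTIVENESS": "partial"},
    },
}

for name in codes:
    print(name)
```

As new interviews are coded, further agents, properties and consequences would be appended to these entries rather than fixed in advance.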

Coding procedures in Grounded Theory Approaches
Strauss and Corbin (1990) describe some flexible guidelines for coding data when engaging in a Grounded Theory analysis:

Open Coding: form initial categories of information about the phenomenon being studied from the data gathered. This is “the process of breaking down, examining, comparing, conceptualizing, and categorizing data” (p. 61).

Axial Coding: involves assembling the data in new ways after open coding. A coding paradigm (logic diagram) is then developed which:
Identifies a central phenomenon
Explores causal conditions
Identifies the context and intervening conditions
Specifies strategies
Delineates the consequences

Selective Coding: involves the integration of the categories in the axial coding model. In this phase, conditional propositions (or hypotheses) are typically presented. The result of this process of data collection and analysis is a substantive-level theory relevant to a specific problem, issue or group. It is “the process of selecting the core category, systematically relating it to other categories, validating those relationships, and filling in categories that need further refinement and development” (p. 116).

Note 1: the three types of coding are not necessarily sequential; they are likely to overlap. After collecting additional data, the researchers return to analyzing and coding data, and use the insights from that analysis process to inform the next iteration of data collection. This process continues until a strong theoretical understanding of an event, object, setting or phenomenon has emerged. (Constant Comparative Method)

Note 2: as mentioned, the process of naming or labeling objects, categories, and properties is known as coding. Coding can be done very formally and systematically or informally; in grounded theory, it is normally done quite informally. For example, if, after coding much text, new categories are invented, grounded theorists do not normally go back to the earlier text to code for them. However, maintaining an inventory of codes with their descriptions (i.e., creating a codebook) is useful, along with pointers to text that contains them. In addition, as codes are developed, it is useful to write memos known as code notes that discuss the codes. These memos become fodder for later development into reports.
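The codebook inventory described in Note 2 can be pictured as a small data structure: code names mapped to descriptions, pointers to text, and memo notes. This is a hedged Python sketch; the field and function names are our own, not a standard:

```python
from dataclasses import dataclass, field

@dataclass
class CodeEntry:
    description: str
    excerpts: list = field(default_factory=list)  # pointers to text containing the code
    memos: list = field(default_factory=list)     # "code notes" discussing the code

codebook: dict = {}

def record(name, description, excerpt=None, memo=None):
    """Add or update a codebook entry; coding here stays informal."""
    entry = codebook.setdefault(name, CodeEntry(description))
    if excerpt:
        entry.excerpts.append(excerpt)
    if memo:
        entry.memos.append(memo)

record("PAIN RELIEF", "attempts to reduce or escape pain",
       excerpt="Any relief you get from drugs ... is only temporary",
       memo="Relief seems to have DURATION and EFFECTIVENESS as properties.")
```

The memo notes accumulated this way become the “fodder” for later reports that Note 2 mentions.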

2. Concepts: codes are then analyzed and those that relate to a common theme are grouped together. This higher order commonality is called a concept (Allan, 2003).

Note: in our understanding, the process of inducing concepts overlaps with the three types of coding described above. They are basically the same, but different researchers describe them differently according to their specific research experience.

3. Categories: concepts are then grouped and regrouped to find yet higher order commonalities called categories. It is these concepts and categories that lead to the emergence of a theory (Allan, 2003).
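The codes-to-concepts-to-categories ladder can be pictured as two levels of grouping. The groupings below are invented for illustration; in a real study they would emerge from constant comparison rather than being fixed in advance:

```python
# Level 1: codes grouped into concepts (higher-order commonalities).
code_to_concept = {
    "PAIN": "symptom burden",
    "PAIN RELIEF": "coping strategies",
    "AGENT OF PAIN RELIEF": "coping strategies",
}

# Level 2: concepts grouped and regrouped into categories.
concept_to_category = {
    "symptom burden": "living with chronic illness",
    "coping strategies": "living with chronic illness",
}

def category_of(code):
    """Climb from a low-level code up to its category."""
    return concept_to_category[code_to_concept[code]]

print(category_of("PAIN RELIEF"))  # -> living with chronic illness
```

It is the emergence and stabilizing of this upper level that eventually yields a theory.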

"An effective strategy is, at first, literally to ignore the literature of theory and fact on the area under study, in order to assure that the emergence of categories will not be contaminated by concepts more suited to different areas." (Glaser & Strauss, 1967)

Note: according to Strauss and Corbin (1998), grounded theory prescribes particular types of categories as components of the theory, but these may not be appropriate for every study.

To Recap: developing a grounded theory model involves systematically analyzing a phenomenon in order to explain how the process occurs inductively (Strauss & Corbin, 1998).

Standards of Validation
Strauss & Corbin (1990) state that there are four primary requirements for judging grounded theory:
1) It should fit the phenomenon, provided it has been carefully derived from diverse data and is adherent to the common reality of the area;
2) It should provide understanding, and be understandable;
3) Because the data is comprehensive, it should provide generality, in that the theory includes extensive variation and is abstract enough to be applicable to a wide variety of contexts; and
4) It should provide control, in the sense of stating the conditions under which the theory applies and describing a reasonable basis for action.

Grounded theory relies on the constant comparative method, so the conformity and coherence of codes, concepts and categories is also an important indicator of a valid grounded theory. A grounded theory is considered reliable when no new categories emerge from newly collected data; at that point the theory can be said to be sufficiently developed.
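The saturation signal described above (no new categories emerging from newly collected data) can be sketched as a simple check. The batches and category names below are made up for illustration:

```python
seen = set()

def saturated(batch_categories):
    """True if this batch of data introduced no previously unseen category."""
    new = set(batch_categories) - seen
    seen.update(batch_categories)
    return not new

# Categories surfacing in three successive rounds of data collection.
batches = [
    {"symptom burden", "coping strategies"},
    {"coping strategies", "social support"},
    {"symptom burden", "social support"},   # nothing new: a sign of saturation
]

results = [saturated(b) for b in batches]
print(results)  # -> [False, False, True]
```

In practice, as the handout notes elsewhere, deciding when categories are truly saturated is a judgment call, not a mechanical test like this one.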

The quality of a theory can be evaluated by the process through which it was developed. This contrasts with the scientific perspective, in which how a theory is generated matters less than its ability to explain new data.

The researcher should not switch their focus from abstraction to description as concepts emerge. Detailed description offers data for conceptual abstraction and the possible emergence of a grounded theory in the future, but cannot be considered grounded theory.

Deciding to use grounded theory means embracing it fully (not pieces of it). It requires the adoption of a systematic set of precise procedures for collection, analysis and articulation of conceptually abstract theory.

Report Writing and Rhetorical Structure
Glaser and Strauss (1967) describe 4 main stages in building grounded theory:

1. Comparing incidents applicable to each category
Begin by coding the data into as many categories as possible. Some categories will be generated by the researcher, and some will come from the language and data of the research situation. As more instances of the same category code are found, ideas about that category can be refined. At that point it is best to stop coding and make a memo of these ideas.

2. Integrating Categories and their Properties
The constant comparative method will begin to evolve from comparing incidents to focusing on emergent properties of the category. Diverse properties will start to become integrated. The resulting theory will begin to emerge by itself.

3. Delimiting the Theory
Eventually the theory comes together, and there are fewer changes to the theory as the researcher compares more incidents. Later modifications include taking out irrelevant properties of categories, and adding details of properties into an outline of interrelated categories. More importantly, the researcher begins to find ways to delimit the theory with a set of higher level concepts. The researcher needs to generalize the theory more as they continue to make constant comparisons against it. The number of categories will be reduced.

New categories are often created halfway through coding, and it usually isn't necessary to go back and code for them. The researcher only needs to code enough to saturate the properties of the category. Later the researcher can evaluate the categories and emergent theory by moving on to new comparison groups.

4. Writing Theory
"When the researcher is convinced that his analytic framework forms a systematic substantive theory, that it is a reasonably accurate statement of the matters studied, and that it is couched in a form that others going into the same field could use -- then he can publish his results with confidence" (p. 113).

A Review of Study: Qualitative Tussles in Undertaking a Grounded Theory Study
This paper, by Judith A. Holton, is a methodological critique of Classic Grounded Theory (as developed by Glaser). Holton attempts to identify and clarify some of the key misconceptions in the use and understanding of Grounded Theory. She uses examples of research studies that have been performed under the guise of grounded theory, but are only using fragments of the grounded theory methodology. Holton explains how this does not constitute true grounded theory research.

Key Points
Personal bias: grounded theory literature often states the need to have no preconceived notions or frameworks in mind when conducting the research. It seems impossible to ignore one’s worldview (and it is). The point is to be able to look at the phenomenon and emerging data through many lenses.

The data fit: one of the biggest problems (as seen by classic grounded theorists) is when researchers dismiss data altogether because it does not “fit”. In grounded theory the data that does not fit established theories and frameworks is the important data! This is what will lead to a totally new view/interpretation of the phenomenon under study.

Giving in: there is a tendency for researchers who undertake grounded theory to fold, or become lenient in their application of the rigid and time consuming process of data analysis. Grounded theory is time consuming and often frustrating. This must be understood and embraced if the process is to be successful.

Description vs. explanation: explanation of patterns of behaviour is the ultimate goal of grounded theory research. Description of what is happening is often offered as a substitute, but these two outcomes are not interchangeable. It is not about accuracy of description; it is about conceptual abstraction, resulting in conceptual hypotheses.

Role of context: the context of the study should not influence data analysis from the outset. The context should be seen as another piece of the puzzle that may or may not be of importance. If it is of importance this will emerge naturally from the participants.

Questions to consider before undertaking a grounded theory study:
1. What is the phenomenon of interest?
2. Does grounded theory best suit the study of the phenomenon?
3. Is there existing literature on the specific area of interest?
4. Are there theories that adequately explain the occurrences within the phenomenon?
5. What is the role of the researcher in the study?
6. Is the body of literature acting as additional data?
7. Is it ensured the context does not influence data analysis?
8. What is the researcher’s relationship to the study?
9. What precautions will be taken to ensure unbiased approach of the researcher?
10. How will constant comparative analysis occur?
11. Who are the subjects of interest?
12. What is the data collection method?
13. What are the coding procedures?
14. How will relationships between concepts be identified and categorized?
15. Are the results new explanations of relationships?
16. Is the process constantly reflexive?

Conclusions and Recommendations
The value of grounded theory lies in its ability to examine relationships and behaviour within a phenomenon from an unbiased, in-depth perspective. That is to say, when a researcher enters a study with no framework or theory they wish to fit the data into, the doors are open to discovering explanations that have yet to be articulated. More importantly, the explanations ultimately come from the participants being studied. When a grounded theory study is executed correctly and rigorously, there is little chance that the resulting explanations have been distorted by the researcher’s personal worldview.

The time and detailed analysis required to properly execute grounded theory methodology make its use daunting and limited. Many variables must be in place (i.e. resources, experience of the researcher, acceptance of methodological processes, etc.) in order for grounded theory to be successfully carried out. When this occurs, the results can be invaluable to the understanding of social phenomena.

Allan, G. (2003). A critique of using grounded theory as a research method. Electronic Journal of Business Research Methods. 2(1).
Charmaz, K. (2006). Constructing grounded theory. Thousand Oaks, CA: Sage.
Chesebro, J.W., & Borisoff, D.J. (2007). What makes qualitative research qualitative? Qualitative Research Reports in Communication. 8(1), 3–14
Creswell, J.W. (2009). Research Design: Qualitative, Quantitative, and Mixed Approaches. Thousand Oaks, CA: Sage.
Dick, B. (2005). Grounded theory: a thumbnail sketch. [On line] Available at
Glaser, B. (1992). Basics of grounded theory analysis. Mill Valley, CA: Sociology Press.
Glaser, B.G. & Strauss, A.L. (1967). The Discovery of Grounded Theory: Strategies for Qualitative Research. Aldine Transaction.
Holton, J. A. (2009). Qualitative tussles in undertaking a grounded theory study. The Grounded Theory Review, 8(3), 37-49.
Martin, P.Y. & Turner, B.A. (1986). Grounded theory and organizational research. The Journal of Applied Behavioral Science, 22(2), 141.
Strauss, A., & Corbin, J. (1990). Basics of qualitative research: Grounded theory procedures and techniques (1st ed.). Newbury Park, CA: Sage.
Strauss, A., & Corbin, J. (1998). Basics of qualitative research: Grounded theory procedures and techniques (2nd ed.). Newbury Park, CA: Sage.