Thursday, January 3, 2013

Research on Code of Ethics

Harrington (1996) studies the effect of codes of ethics on the computer abuse judgments and intentions of information systems employees. In examining codes of ethics, Harrington (1996) investigates both company-wide and information systems (IS) codes. Her research fills a gap in the literature, which calls for more work on the effectiveness of codes of ethics. Many studies discuss the controversial topic of ethics and the personality traits that affect it. She notes that codes are assumed to influence employees' decision-making processes, but empirical investigations of IS personnel are scarce. Additionally, she points out that generic company codes of ethics do not specifically address IS personnel.

Harrington (1996) surveyed over 200 IS employees in nine organizations on the topics of cracking, illegal software copying, sabotaging competitors' security, spreading viruses, and fraud, using vignettes and questionnaires. The vignettes offered a less intimidating way to respond to sensitive topics. The study's objective was to discover employees' intentions when placed in specific circumstances. A five-point Likert scale measured the intention score and the presumed ethicality. A factor analysis using varimax rotation of all the statements checked the validity of the results.

Through an ANOVA, Harrington (1996) discovered that generic codes of ethics have little direct effect on computer abuse judgments. Similarly, specific IS codes of ethics showed little direct relationship with computer abuse. The analysis also showed little differential measurement error in the independent variable across levels of the moderator, a situation that could otherwise cause bias. Finally, the research shows that codes of ethics have little effect on sabotage and fraud. Harrington (1996) concluded that supervisors must take a multifaceted approach to preventing computer abuse rather than relying solely on codes of ethics.
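For readers unfamiliar with the method, the one-way ANOVA Harrington relies on reduces to an F-statistic that compares between-group variance to within-group variance. The sketch below computes it from first principles in Python; the intention scores are invented for illustration and are not Harrington's data.

```python
# One-way ANOVA F-statistic computed from first principles.
# Groups are hypothetical five-point intention scores for employees
# with and without a code of ethics (illustrative data only).

def anova_f(groups):
    """Return the one-way ANOVA F-statistic for a list of groups."""
    all_vals = [x for g in groups for x in g]
    n, k = len(all_vals), len(groups)
    grand_mean = sum(all_vals) / n
    # Between-group sum of squares: how far each group mean sits
    # from the grand mean, weighted by group size.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    # Within-group sum of squares: spread of scores inside each group.
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (n - k)
    return ms_between / ms_within

with_code = [2, 1, 2, 3, 2, 1]     # intention scores, code of ethics present
without_code = [3, 2, 4, 3, 3, 4]  # intention scores, no code of ethics
f = anova_f([with_code, without_code])
```

A large F suggests the group means differ more than chance would explain; Harrington's finding was essentially that, for generic codes, F-values of this kind were too small to matter.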

Harrington, S. J. (1996). The effect of codes of ethics and personal denial of responsibility on computer abuse judgments and intentions. MIS Quarterly, 20(3), 257-278.

Perceived Competitive Performance

Nidumolu and Knotts (1998) researched software development management to identify patterns or differences when a company faces intense competition. Their hypotheses focused on the effects of reusability and customizability on software process flexibility and predictability, and in turn on the software firm's perceived competitive performance. To test these hypotheses, they leveraged manufacturing strategy research and measures of perceived competitive performance.
The study examined product cost efficiency, market responsiveness, process flexibility, process predictability, reusability, and customizability. It used the American Software Association (ASA) as the sampling frame. The authors mailed a questionnaire to 100 firms selected from the ASA; fifty-eight chief technology officers returned questionnaires for analysis.

In the analysis, Pearson correlations among the constructs suggested many significant relationships a priori. To clear up any ambiguity in the correlations, the authors ran a path analysis. From this investigation they concluded that customizability has a significant influence on process predictability and flexibility, and therefore on perceived competitive performance as well.
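The Pearson correlations the authors start from follow a standard formula that can be computed directly from paired construct scores. A minimal sketch in plain Python; the scores below are hypothetical and are not Nidumolu and Knotts' data.

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    # Covariance numerator and the two standard-deviation terms.
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical construct scores: customizability vs. process flexibility.
customizability = [3.1, 4.0, 2.5, 4.4, 3.6]
flexibility = [2.9, 4.2, 2.4, 4.0, 3.8]
r = pearson_r(customizability, flexibility)
```

A correlation matrix of such pairwise r values is only suggestive, which is why the authors followed it with a path analysis to separate direct from indirect effects.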

Nidumolu, S. R., & Knotts, G. W. (1998). The effects of customizability and reusability on perceived process and competitive performance of software firms. MIS Quarterly, 22(2), 105-137.

Cloud & Rackspace

Buyya, Broberg, and Goscinski (2011) define cloud computing architecture as “a techno-business disruptive model of using distributed large scale data centers either private or public or hybrid offering customers a scalable virtualized infrastructure or an abstracted set of services qualified by service level agreements and charged only by the abstracted IT resources consumed” (p. 44). Fully grasping this definition requires some prior knowledge. The techno-business disruptive model refers to the options businesses can select today. In the past, a business needed to acquire and maintain expensive infrastructure and a data center in order to host any services. Now businesses do not need the hardware; in essence, they can lease it while a third party maintains it. Additionally, Buyya, Broberg, and Goscinski (2011) elaborate on several categories of cloud computing, concentrating primarily on public, private, and hybrid. The public cloud offers utilities and services to the general public on a pay-as-you-go basis. The private cloud provides similar services but can be accessed only by the internal organization, not the general public. The third category, hybrid, builds its foundation on a private cloud but draws tools and resources from the public cloud.

Buyya, Broberg, and Goscinski (2011) describe the cloud well; however, many definitions and understandings of cloud architecture exist. The concept has been “coined as an umbrella term to describe a category of sophisticated on demand computing services…” (Buyya, Broberg, & Goscinski, 2011, p. 3). Tom Bittman describes the cloud as elastic and scalable IT capabilities delivered as a service to consumers through Internet technologies. He attributes the growth of cloud computing to the expansion of worldwide connectivity through the Internet, the evolution of sharing technologies (both software and hardware), progress in service-oriented interfaces, and automation. He also focuses on the cloud's surge in popularity, stating that the speed, agility, and low barrier to entry of leveraging the cloud have caught everyone's attention. Startup firms along with small and medium-sized businesses no longer require data centers to house, host, or supply data. Warehouses of server farms provide the backbone of the cloud's hardware infrastructure, easing the burden of every corporation needing its own data center.

More importantly, the cloud fits the needs of the customer. The scalability factor alone benefits many businesses, as they pay only for what they use. Additionally, different levels of the cloud allow businesses to select what they need. Infrastructure as a Service (IaaS) offers virtualized resources such as computation, storage, and communication. Platform as a Service (PaaS) offers an environment where developers can create and deploy software. Finally, Software as a Service (SaaS) offers web applications for the end user, relieving the business of supporting the software.
One cloud solution that I have heard and read a lot about, but never really researched until now, is Rackspace. Rackspace supports all three cloud deployment models: public, private, and hybrid. Additionally, Rackspace offers the traditional perks of a cloud solution: pay-as-you-go pricing, all of the service packages (IaaS, PaaS, and SaaS), and complete manageability. Furthermore, Rackspace advertises the speed and agility to provide a business service even when the required resources are a moving target.

Rackspace has an extremely impressive resume. It can provide all the solutions a business would need in terms of IaaS, PaaS, and SaaS. Additionally, Rackspace can set up and support private clouds where the equipment actually resides within the business's walls, and likewise with its hybrid cloud support. Moreover, Rackspace places a high priority on security, with safeguards such as firewalls, antivirus, antimalware, and encryption through SSL certificates, along with physical security for its hardware infrastructure. Rackspace also holds a slew of accolades in security prevention and expertise. Furthermore, Rackspace offers a 100% uptime service level agreement.

Buyya, R., Broberg, J., & Goscinski, A. (2011). Cloud computing: Principles and paradigms. John Wiley & Sons.

Tom Bittman’s videos

Experiential Space

Each person defines the experiential space perspective differently. The perspective evolves from a person's encounters with the surrounding environment, and each person views conditions or situations in a distinct manner. For instance, a high cliff may appear adventurous to a rock climber but dangerous to an acrophobe. According to Tuan (1977), the structuring of worlds calls for intelligence (p. 10). The intelligence of which he speaks refers to the acts of seeing, hearing, smelling, feeling, and so on, although it is much more. An architectural environment appeals to sight, but it lacks smell, sound, and taste. All senses must combine to complete the encounter, or experiential space, for the individual. The experiential space creates memories, passions, and biases, and each individual interprets an event in a unique manner based upon the accumulation of experiences.

Some individuals struggle to manage their experiential space in real life, and that is all they can handle, while others decide to experience virtual worlds. Virtual worlds simulate experiential space perspectives from the real world, but this is difficult to do properly. Virtual worlds lack smell, taste, and touch; visual designs and sounds must trigger an individual's past experiences in the real world to let them imagine they are in the virtual world. Yet virtual worlds provide opportunities that the real world cannot. For instance, avatars can fly without the aid of a plane, and amazing buildings can be constructed in hours or days instead of months or years. To create a positive experiential perspective, designers must analyze every aspect of the build so the user can intrinsically interpret its meaning.

As we interpret the experiential space, we may also consider its meaning for machines and explore the dynamics of virtual machines. In the real world, our experiential space perspectives of a computer should be fairly similar. Yes, there will be arguments over which brand performs best and over the specific definition of a computer (desktop, laptop, netbook, tablet, phone, etc.). However, when someone mentions a computer, we have some image in our head. As we examine virtual machines, our images increase in variety. The reason stems from the same origin as the experiential space perspective of virtual worlds: virtual machines cannot be described with all of our senses. We cannot touch a virtual machine, and we cannot see it (we can see the GUI, but not the actual machine). Virtual machines are bound within physical machines, sharing resources such as hard drive space, CPU processing, and memory. The issue is that the physical machines could be spread around the world, so where do the virtual machines exist? The relative location of a virtual machine lives, adapts, and moves within physical constructs of machines, networks, and storage arrays.

An example of a virtual machine moving from physical box to physical box involves replication of the virtual machine. Bose, Brock, Skeoch, and Rao (2011) explain the phenomenon of virtual machines replicating across a WAN stretching between continents. They solve a specific latency issue and improve the efficiency of moving virtual machines across a network. In this sense, the virtual machines live in the Internet and move from data center to data center. This "CloudSpider," as they call it, gives life to a virtual machine: a machine within a machine.

Bose, S. K., Brock, S., Skeoch, R., & Rao, S. (2011). CloudSpider: Combining replication with scheduling for optimizing live migration of virtual machines across wide area networks. In Proceedings of the 11th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid) (pp. 13-22). IEEE.

Tuan, Y. F. (1977). Space and place: The perspective of experience. University of Minnesota Press.

SaaS Views

Implementing Software as a Service (SaaS) into an existing business model can offer many advantages, but the entity must weigh both the advantages and disadvantages of SaaS along with the implementation's overall effect on the business. A 2012 blog post highlights the advantages of adding SaaS to a corporation's business model. Some of the perks include cost-cutting measures, scalability, data protection, guaranteed service, constant upgrades, information sharing, flexibility, and usability. According to the blog, SaaS is a lucrative option for a sound and cost-effective IT support solution. Buyya, Broberg, and Goscinski (2011) point out similar advantages: scalability, flexibility, ease of accessibility and configurability, robustness, security, and affordability. They also envision SaaS services that "… seamlessly and spontaneously coexist, correlate, and coordinate with one another dynamically with dexterity to understand one or more users' needs, conceive, construct, and deliver them at right time at right place. Anytime anywhere computing tends towards everywhere every time and everything computing" (p. 59).

On the other hand, Buyya, Broberg, and Goscinski (2011) note that the disadvantages must weigh into the decision as well. They cite controllability, visibility, security and privacy, availability, performance, integration, and standards as possible challenges of SaaS adoption. The CloudComputingTopics blog echoes similar concerns about SaaS implementation: security, capital outlay, disaster recovery, and deployment. If a large business already has the infrastructure and customized applications in place and the solution works well, the experts suggest avoiding SaaS unless the entity is considering end-of-lifing the existing solution or looking to upgrade.
Within the SaaS paradigm, entities must examine both the advantages and disadvantages before adoption. To recap, small-to-medium businesses that struggle to afford in-house customized software, or large businesses looking to upgrade, should consider a SaaS solution. SaaS offers cost savings by removing the need to purchase infrastructure to support software, the flexibility to scale the solution through the peaks and troughs of demand cycles, and uptime guaranteed by the vendor. However, large businesses that already have an existing infrastructure may elect not to use SaaS, because the existing solution already meets their needs.

One final element to consider when investigating a cloud solution involves awareness of place. A business requires a place to call home, even if its employees travel nonstop and are stationed across the world. The employees need a place to call home, a place with which they identify. These identification mechanisms are the awareness of place. A SaaS solution adopted by an entire entity can provide this awareness of place. Tuan's (1977) chapter "Attachment to Homeland" discusses various groups and time periods and their attachment to their homeland; he provides examples and illustrates the need to identify with a place. This awareness of place grows strong in the human spirit and continues to exhibit the same characteristics in today's society. Our awareness of place still embodies a physical place, but it has also started to take on a digital reality. A common business web portal or a SaaS solution customized for a business can now be called our homeland. We feel secure logging into this portal to do work as we travel around the world to complete our jobs. Awareness of place holds a strong piece of the puzzle when deciding whether to replace an existing solution with a SaaS solution.

4 reasons why businesses might not want to switch to the cloud. (2012). Retrieved November 15, 2012.

Buyya, R., Broberg, J., & Goscinski, A. (2011). Cloud computing. Hoboken, New Jersey, United States of America: John Wiley & Sons, Inc.

SaaS – Why does your business need it? (2012). Retrieved November 15, 2012.

Tuan, Y. F. (1977). Space and place: The perspective of experience. University of Minnesota Press.

Office 365

1. Discuss how it relates to one of the Montage spaces.
Microsoft Office 365 relates to all of the Montage spaces. Office 365 is moving up the ranks as a top cloud solution, and with the release of the Windows 8 operating system, Office 2013, and the Windows 8 phone, it has received a large boost. For the purposes of this question, I have chosen Private Space. Traditional office software allowed an individual to create documents, spreadsheets, presentations, and notes that resided on an individual machine. Now with Office 365, a user can save his or her information in the cloud, where it is always backed up and available on any device with an Internet connection. Windows 8, Office 2010 and 2013, and the Windows 8 phone sync with Office 365: I can create on one device and read from another. Office 365 also offers the user a large amount of storage space. This private space affords users a virtual area to analyze, comprehend, create, imagine, and much more. Additionally, the private space provides the tools that most enterprise corporations and the general public use, such as Word, Excel, PowerPoint, Outlook, and OneNote.

2. Describe the characteristics that define its behavior and architectural style.
The fluidity of Office 365 allows customers to build a private, public, or hybrid cloud to meet their needs. Microsoft can carve out a section of Office 365 specifically for a company to make it a private cloud, which would leverage the company's own servers, such as Exchange. Another option lets the company launch its initiative as a public cloud. The final option allows companies to create a hybrid cloud; for instance, a company could leverage its Exchange servers but use the cloud-based Office Web Apps.

Additional fluidity lets Office 365 customers use the service as PaaS, SaaS, or IaaS, and these services can be combined. With Office 365, companies no longer need to purchase Office software and continuously upgrade it; users only need a browser to access Office. Additionally, companies can leverage Office 365's scalability, allowing them to cut costs on infrastructure. Finally, since Office 365 is web based, it will run on any operating system. It also acts like a platform with its integrated SharePoint, web apps, and storage space.

3. Is today's implementation of this service the same as its original design and intent or has it evolved as users have found new ways to use it?
Since it is only a few years old, Office 365 for the most part has not evolved much. It is currently being used as its designers intended, although there have been slight alterations to assist with collaboration efforts.

4. How has it evolved? Why has it changed? If it has not changed, why not?
Initially, Office 365 was a standalone cloud solution. With the release of Microsoft's new products, Office 365 now integrates with the new operating system, the Office suite, and the phone. This integration affords users more flexibility, usability, and ownership. The operating system and Office integration provides a more robust environment for creating content, while the Office 365 integration offers storage in the cloud. With the phone, users can access their Office 365 content at any time.

5. How may it change to support the future? Make a prediction and discuss why it will happen.
Office 365 will continue to grow with the release of Windows 8. The accessibility of Office 365 allows Microsoft to compete better with Apple: phones, tablets, and PCs can now sync their information and provide instant access. Additionally, Microsoft gains another front on which to compete with Google Docs (now Drive).

Office 365 will continue to improve in the area of collaboration. Files can now be shared, hosted, and worked on together, and the Office 365 SharePoint integration is just a stepping stone. I foresee a large expansion in connectedness for Office 365, and leveraging the idea of collaboration, I would not rule out Microsoft trying to increase its social media footprint too.


Workflow Engines for Clouds

Workflow engines for clouds provide a unique solution for many corporations. A workflow offers a simplified vantage point on the complex execution and management of applications. Processing and managing big data requires distributed server farms and data centers. The emergence of recent virtualization technologies and the expansion of cloud acceptance have helped shift distributed computing to a new paradigm, one that relies on existing resources for scalable computing.

Services within the cloud have opened new possibilities for vendors and corporations. The Infrastructure as a Service (IaaS) virtualization allows vendors to offer virtual hardware for intensive workflow applications. Platform as a Service (PaaS) clouds expose a high level development and runtime environment for building and deploying applications on the aforementioned IaaS. Finally, Software as a Service (SaaS) solutions give corporations the flexibility of leveraging their solutions to integrate into the existing workflows.

Scientific workflows have traditionally run on infrastructures such as the Open Science Grid and dedicated clusters. Existing workflow engine systems typically take advantage of these free grids through some type of research agreement. The Cloudbus Toolkit workflow engine is an example of scaling workflow applications on clouds using market-oriented computing.

The primary benefit of moving to clouds is application scalability. The elastic nature of clouds allows resource quantities and characteristics to vary during application runtime: resources scale up when demand is high and down when demand is low. With this capability, services can easily meet Quality of Service (QoS) requirements for applications, a feature never truly available in traditional computing.
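The elastic scaling described here can be reduced to a simple control rule: provision just enough nodes for current demand, bounded by a floor and a ceiling. A toy sketch in Python; the capacities and demand figures are invented for illustration, not taken from any real provider.

```python
def nodes_needed(demand, capacity_per_node, min_nodes=1, max_nodes=20):
    """Smallest node count that serves `demand` requests/sec, within bounds."""
    needed = -(-demand // capacity_per_node)  # ceiling division
    return max(min_nodes, min(max_nodes, needed))

# The pool tracks demand instead of staying provisioned for the peak,
# which is where the pay-as-you-go savings come from.
trace = [nodes_needed(d, capacity_per_node=100)
         for d in (50, 400, 900, 300, 80)]
# trace follows the demand curve up to 9 nodes and back down to 1.
```

Real autoscalers add smoothing and cooldown periods so the pool does not thrash, but the core demand-following logic is the same.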

With the shift from traditional to cloud computing, Service Level Agreements (SLAs) have become a primary focus for both service providers and corporate consumers. Competition among service providers has driven SLAs to be drafted with extreme care, enticing corporate consumers with specific niches and advantages over competitors. Cloud service providers also leverage economies of scale, providing computing, storage, and bandwidth resources at substantially lower costs.

The workflow system involves the workflow engine, a resource broker, and plug-ins for communicating with various platforms. To illustrate the workflow system's architectural style, we can examine scientific applications, which consist of tasks, data elements, control sequences, and data dependencies. The components within the core are responsible for managing the execution of workflows. The plug-ins support workflow execution in different environments and platforms. The resource brokers occupy the bottom layer and consist of grids, clusters, and clouds.
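The tasks and data dependencies described above form a directed acyclic graph, and the engine's core job is to release each task only after its dependencies finish. A minimal sketch using Python's standard-library graphlib, with a hypothetical five-task scientific workflow (the task names are invented for illustration):

```python
from graphlib import TopologicalSorter

# A tiny scientific workflow: each task maps to the set of tasks it
# depends on, forming a directed acyclic graph.
workflow = {
    "fetch_data": set(),
    "clean": {"fetch_data"},
    "analyze_a": {"clean"},
    "analyze_b": {"clean"},
    "report": {"analyze_a", "analyze_b"},
}

# static_order() yields a valid execution order: every task appears
# after all of its dependencies.
order = list(TopologicalSorter(workflow).static_order())
```

A real engine would dispatch analyze_a and analyze_b in parallel once clean completes, which is exactly where the resource broker and cloud elasticity come into play.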

The workflow system has great potential for future growth; Gartner ranked the paradigm at the top of its 2010 hype cycle. Challenges remain, but the potential for growth exceeds imagination. Giants in the field like Microsoft, Amazon, and Google own enormous data centers to provide these services to corporate consumers. Eventually, these consumers can begin to form hybrid models, selecting different sections of vendor services to create their own workflow cloud. The current workflow model is just the first step toward offering customers a complete solution.

Buyya, R., Broberg, J., & Goscinski, A. (2011). Cloud computing. Hoboken, New Jersey, United States of America: John Wiley & Sons, Inc.