Analysis of the File System Provided by IBM System i

The New File System

The file system is one of the most visible parts of an operating system. Almost all operating systems let users create named objects, called files, which hold programs, data, or anything else the user desires. The operating system also provides various APIs to create, read, write, destroy, and manage files. Moreover, most operating systems have their own unique and distinct file types. For example, UNIX has regular files, directories, and special block and character files. Regular files hold user data; directories are used to keep track of files and consist of the information needed to give files symbolic names; and block and character special files are used for modeling disk and terminal-like devices. One of the very first application enablers added to the AS/400 was the ability to store files from other operating system environments, such as UNIX. At V3R1, the integrated file system (IFS) was introduced; it not only created a consistent structure over all the file systems already present in the AS/400, but also added the new file system structures needed for other operating system environments. Over time, new file systems have been added, and by the time the iSeries was announced, the IFS supported 10 file systems and three server types.

Features and Advantages

The IFS is part of OS/400 and provides a consistent structure for the applications and users that were previously part of the AS/400. In addition, it provides support for the stream I/O files used by the PC and UNIX operating systems. Stream files, which contain long continuous strings of data, have become increasingly significant for storing images, audio, and video (Soltis, 2001). Moreover, the IFS provides a common view of stream files that are stored either locally on the iSeries or on a remote server. To manage all the files, a new hierarchical directory structure was created. This hierarchy permits access to an object by specifying the path to it through the directories, in a way similar to that seen in PC and UNIX file systems. This directory and file structure allows information such as UNIX stream files, record-oriented database files, and file serving to be handled through separate file systems or through a common interface, depending on the application's needs. The main advantage and unique capability of the IFS is its ability to support separate file systems through a common, consistent interface. In the original S/38, it was easy to find an object in the database by simply looking up its name in a library. A library supplied a means by which objects could be organized into distinct groups and allowed objects to be found by name. A similar library structure was incorporated into OS/400 (Soltis, 2001).

Libraries: Libraries are OS/400 objects that are used to find other objects in the database. The library structure is a single-level hierarchy, unlike the directory structures found in UNIX and PC operating systems, which are multi-level hierarchies. Finding an OS/400 object requires the object name, the library name, and the object type to uniquely identify the object. A library cannot reference other libraries, as that would violate the single-level Library/Object hierarchy; the one exception is a special library called QSYS, which is used to reference other libraries (Soltis, 2001).

Shared Folders: Shared folders were brought into OS/400 primarily to support Office/400 functions. The S/36 was an efficient office system, and most of the concepts related to folders originated from it. An OS/400 folder object was included to support these Office functions. In other words, the integrated Office support provided a filing system for all Office objects that contained data for an Office product. Documents, emails, programs, and files were among the conventional items found in this file system. Document library services allow users to treat the filing system as an electronic filing cabinet furnished with document library objects in folders, and folder management services allow users to organize the objects in these folders. Folders may contain other folders and may be searched interactively. Along with the traditional Office items listed above, shared folders therefore contain stream files for graphs, images, spreadsheets, PC programs, and PC files. IBM later replaced PC Support with a product called Client Access that provided a platform for distributed client-server computing (Soltis, 2001).

Initially, the primary challenge was to determine how to construct a single file system that could integrate all of these file system types, since the separate file systems had not been designed to be compatible with one another. As previously discussed, the single-level-hierarchy OS/400 library structure is a subset of the one used in PC operating systems, with different names but a similar structure. The PC operating systems contain files instead of objects, and a library is called a directory, in which files exist. Unlike the OS/400 library structure, directories that exist within directories on the PC are usually called subdirectories. This arrangement creates a multi-level hierarchy for PC file naming, as opposed to the single-level structure employed by the OS/400 library. A PC file name takes the form \DIR1\DIR2\...\DIRn\FILENAME, which is a superset of the OS/400 library naming structure, except for the backward slashes (\). Likewise, a UNIX file structure is a superset of the OS/400 library structure; in addition, a UNIX file system allows multiple paths to the same object. The solution for combining all these file systems was to use a single root from a UNIX-like file system and to place all the others under this root. A path name then takes the form /DIR1/DIR2/.../DIRn/FILENAME. The lengths of the file and directory names in the IFS were increased to match UNIX-based open system standards such as POSIX and XPG. In OS/400, the names in the IFS directories are stored in Unicode, an international standard supporting multiple languages, including the double-byte character sets used by many nations (Soltis, 2001).

Because the IFS allows the same data to be accessed from different environments, the data format must be compatible with the application that requests it, or the data must be converted to a compatible format. OS/400 manages the conversion between data formats such as the Extended Binary Coded Decimal Interchange Code (EBCDIC) used by native iSeries applications and the American Standard Code for Information Interchange (ASCII) used in the PC world. All the files and file systems in the IFS are treated like OS/400 objects, and each file system supports the same set of operations, described for each file system in a standard operations table.
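
The EBCDIC/ASCII conversion mentioned above can be illustrated with a small Python sketch that uses the standard library's cp037 codec, one common EBCDIC code page. The choice of code page is an assumption made purely for illustration and is not a statement about which code pages OS/400 actually uses.

    # Illustrative only: round-trip text between an EBCDIC code page (cp037)
    # and ASCII, roughly analogous to the conversion OS/400 performs when
    # data moves between native and PC/UNIX environments.
    text = "HELLO IFS"

    ebcdic_bytes = text.encode("cp037")   # EBCDIC representation of the string
    ascii_bytes = text.encode("ascii")    # ASCII representation of the string

    # Converting EBCDIC data so an ASCII-based client can use it:
    converted = ebcdic_bytes.decode("cp037").encode("ascii")
    assert converted == ascii_bytes

    print(ebcdic_bytes.hex())   # c8c5d3d3d640c9c6e2
    print(ascii_bytes.hex())    # 48454c4c4f20494653
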
Each file system possesses its own set of logical structures and rules for interacting with information in storage, and these structures and rules differ from one file system to another. By employing a table approach, new file systems can be introduced into the IFS without requiring changes to the existing file systems or to the operations corresponding to those file systems. The IFS treats the original library support and the original folders support as separate and distinct file systems, alongside the other file systems with their varying capabilities, and new file systems may be integrated into the IFS as required. The virtual file system (VFS) architecture provides a common interface to every file system in the IFS. VFS is an object-oriented interface that provides objects called nodes: abstract objects that represent the real objects stored in a file system. The nodes allow a collection of abstract operations to be defined that can be performed on the real objects (Soltis, 2001). The UNIX-, Windows-, and OS/400-based servers allow the iSeries to act as both client and server when sharing data over a network. Additionally, the IFS supports xSeries servers, either remotely attached to the iSeries or integrated into it, and a Linux server running in an iSeries partition.
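
The operations-table and node abstraction just described can be sketched in Python as follows. The class and function names are hypothetical and are meant only to show how a registry of file systems, each implementing the same abstract operations, lets new file systems be added without touching existing ones; this is not the actual VFS interface.

    # Hypothetical sketch of a VFS-style operations table: every registered
    # file system implements the same abstract operations, and a registry
    # dispatches each path to the file system that owns it.
    from abc import ABC, abstractmethod

    class FileSystemOps(ABC):
        """Abstract operations that every file system must provide."""

        @abstractmethod
        def lookup(self, path):
            ...

        @abstractmethod
        def read(self, path):
            ...

    class StreamFS(FileSystemOps):
        """Stands in for a stream-file system such as the Root or QOpenSys."""

        def __init__(self):
            self._files = {"/home/readme.txt": b"stream data"}

        def lookup(self, path):
            return path in self._files

        def read(self, path):
            return self._files[path]

    class LibraryFS(FileSystemOps):
        """Stands in for a library/object file system such as QSYS.LIB."""

        def __init__(self):
            self._objects = {"/QSYS.LIB/YOURLIBXX.LIB/CUSTOMER.FILE": b"records"}

        def lookup(self, path):
            return path in self._objects

        def read(self, path):
            return self._objects[path]

    # The registry maps a root prefix to the file system mounted there; a new
    # file system can be added here without changing the existing classes.
    registry = {"/QSYS.LIB": LibraryFS(), "/": StreamFS()}

    def resolve(path):
        # Longest matching prefix wins (a simplification of real path routing).
        prefix = max((p for p in registry if path.startswith(p)), key=len)
        return registry[prefix]

    fs = resolve("/QSYS.LIB/YOURLIBXX.LIB/CUSTOMER.FILE")
    print(type(fs).__name__, fs.read("/QSYS.LIB/YOURLIBXX.LIB/CUSTOMER.FILE"))

Here the longest-prefix routing and the in-memory dictionaries are simplifications; the point is only that every file system answers the same set of operations.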

NFS Server: The NFS file system and NFS server let the iSeries act as both client and server by allowing remote file systems or directories to be mounted locally on a PC, a UNIX workstation, or the iSeries. The NFS file system allows users to access data and objects stored on a remote NFS server, and through the NFS server the iSeries can export a network file system. The server lets data be exported from any of the following file systems: Root (/), QSYS.LIB, QOpenSys, QOPT, and UDFS. Communications between client and server are accomplished through Remote Procedure Calls (RPCs) (Soltis, 2001).

OS/400 NetServer: OS/400 NetServer allows Windows PCs to access IFS directories and OS/400 output queues, with TCP/IP configured on both the iSeries and the PC. PC users access the shared information via the Windows Network Neighborhood. A PC user connected to the iSeries views the IFS as a disk drive containing directories and objects, and can work with files in the IFS by using either the file-sharing clients built into Windows and NetWare or Client Access Express for Windows. Major changes have been made to the Client Access support of the iSeries. Previously, the Client Access family of products used proprietary support for printers and network drives; these products have been replaced with two new products that do not require the proprietary support, i.e. iSeries Client Access Express for Windows and iSeries Access for Web. The Express client uses built-in features of the Windows operating system to communicate with OS/400 NetServer for network file and print access. Using the Express client, users can also access files through the Client Access SPIs under File System in Operations Navigator (OpsNav). Directories may be created, removed, and renamed when working with the IFS under File System in OpsNav, and drag-and-drop functions are available in the Express client under Integrated File System with the Client Access SPIs. This is particularly important for taking advantage of the Secure Sockets Layer (SSL): the SPIs support SSL, while OS/400 NetServer does not (Soltis, 2001).

IFS objects can be accessed using either OpsNav or CL commands, the command language for OS/400 that was carried forward from the S/38. All the APIs used in the IFS are thread safe when they are directed at an object in a thread-safe file system; QOpenSys, QSYS.LIB, QOPT, QNTC, and user-defined file systems are all thread safe. DataLinks extend the types of data stored in a database file. DataLink columns in the database hold references to non-database files that are stored in the local IFS, in a remote IFS, or in the file system of any attached UNIX or Windows server that has IBM's DataLink Manager installed. A DataLink specifies the object's file location rather than storing the data object itself in the database column. DataLink support lets the user designate directories in the Root to hold DataLink objects; once a directory is set up as a DataLink directory, all objects in that directory are accessed via the DataLink File Manager (DLFM). The IFS is thus a fundamental application enabler for the iSeries: it enhances the existing data management capabilities of OS/400 and extends them to support emerging new application environments (Soltis, 2001).
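
The DataLink idea described above, i.e. a database column that stores only a reference to a stream file while access is routed through a file manager, can be illustrated with the following hypothetical Python sketch. The class, the column names, and the sample path are assumptions made for illustration and do not reflect the actual DLFM interface.

    # Hypothetical illustration of a DataLink: the database row stores only a
    # reference (a path) to a stream file, and reads go through a file-manager
    # object instead of pulling the bytes out of the row itself.
    class DataLinkFileManager:
        """Stand-in for the DLFM: resolves links and could enforce access rules."""

        def open_link(self, link_path):
            return open(link_path, "rb")

    dlfm = DataLinkFileManager()

    # A customer row whose IMAGE column is a DataLink, not the image bytes.
    customer_row = {
        "CUSTOMER_ID": 1001,
        "NAME": "ACME",
        "IMAGE": "/home/images/acme_logo.png",   # placeholder reference only
    }

    def read_linked_object(row, column):
        # The column value is treated as a location; the manager does the access.
        with dlfm.open_link(row[column]) as stream:
            return stream.read()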

The New I/O

The I/O subsystem pervades almost every part of a computer system. Although it is only one-third of the processor, memory, and I/O complex, I/O is considerably larger than either of the other two: the I/O configuration requires more hardware and more lines of operating system code than anything else in the system. Briefly, the I/O subsystem consists of the group of components, both hardware and software, responsible for processing input and delivering output to the various types of devices attached to the system. Whenever a user requires a system resource, whether reading from or writing to a file, requesting that the instructions of a program be executed, referencing another system object, or creating or destroying an object, and that resource has not already been brought into memory, the computer goes through the I/O subsystem to retrieve, store, create, or destroy the resource. Although I/O touches every level of a computer system, it is usually relegated to second-class citizenship (Soltis, 2001). Along with the memory subsystem, the I/O subsystem determines the response time and throughput of most computers. As we approach the time when computers from low-end PCs to the fastest supercomputers use the same microprocessor building blocks, I/O capabilities may be the only feature that distinguishes one computer system from another. Therefore, it can be argued, as many are now starting to do, that I/O is the most important component in the system. The I/O subsystem of the iSeries has undergone major transitions in the past few years, so it is appropriate to examine those changes to see how they will affect IBM's future system designs. The iSeries uses a hierarchy of components for the attachment of I/O, which includes I/O buses, I/O hubs, I/O bridges, I/O processors (IOPs), I/O adapters (IOAs), and lastly, I/O devices and network connections. The history of I/O subsystem designs in the AS/400 and S/38 is partly technical and partly political: for both systems, the I/O designs were heavily influenced by factors outside the Rochester development lab (Soltis, 2001).

Pre-history: The S/38

The S/38 employed an I/O channel that was similar, in many ways, to an IBM System/370 (S/370) channel. A channel can be thought of as a specialized computer built alongside the main processor and designed to offload I/O processing from it. A channel has its own instruction set, designed especially to interact with the bus-attached I/O adapters and to transfer data between memory and I/O devices. The main processor passes channel programs to the channel hardware, which then executes them simultaneously with the other programs executing in the main processor. When a channel finishes with its program, it interrupts the main processor to get more work. The main reason the S/38 used a channel was the people involved: many new people had to be brought into Rochester to design and build the S/38 (Soltis, 2001). Most of them came from other parts of IBM, and since the designers of the I/O hardware had previously worked with S/370 channels, they decided to use a variant of a design they were familiar with. However, unlike the S/370, which supported multiple channels, the S/38 had only a single channel; a second channel was designed but never implemented. The S/38 channel could be described as a block-multiplexer channel operating in a fixed burst mode (Soltis, 2001).

For the very first AS/400s, a new I/O structure was developed. Unlike a channel, where the I/O intelligence is placed alongside the main processor or is even part of it, the new structure distributed the intelligence across specialized IOPs designed to perform I/O operations. The concept of using separate processors for I/O had earlier been adopted by the System/36 (S/36), so the concepts were well known to Rochester developers. A fundamental part of this new I/O structure was the System Products Division (SPD) bus. During the 1980s, IBM corporate management was concerned that IBM had too many midrange computers all competing for the same customer market. To solve this problem, management moved the responsibility for five distinct midrange systems into one development division, the SPD. Two of those systems, the S/34 and the S/38, were products from Fortress Rochester. Soon, a new project with the code name Fort Knox was started in SPD to converge all five systems into one, with the various parts of the new system built in the different locations that were part of SPD (Soltis, 2001).

From 1988 through 1997, the system I/O bus used in all AS/400s was the SPD bus. Specialized hardware generated the signals for one or more SPD buses and handled the transition from the high-speed system memory bus to these SPD system I/O buses. The SPD bus itself underwent various updates over the years, but it remained a packet-oriented bus with enhancements that allow efficient streaming of data. A packet-oriented bus transmits messages in small blocks, known as packets; streaming means that continuous blocks of data can be sent, which permits higher data rates on the bus. In streaming mode, the SPD bus can support data rates of 25 to 36 megabytes per second (MB/sec), depending on direction and contention on the bus. The SPD bus supported a parallel copper implementation that could be used inside a single enclosure; this parallel copper bus had 32 lines to transfer data, 8 lines to transfer command and status information, 8 lines for origin-destination identification, and several other control lines. When the AS/400 was introduced in 1988, the parallel copper SPD bus was the only one available. Later, a serial optical version of the SPD bus was introduced that could go outside a single enclosure; this serial optical bus could cover distances of up to 500 meters, and even two kilometers in certain environments (Soltis, 2001).

Connected to the SPD bus were the IOAs and IOPs that worked together to perform several functions. They moved data between the system's main memory and the I/O devices and network connections, translated data from one form to another, and provided a buffer for data required by the processors from high-speed devices such as disks. Early on, these I/O functions were implemented with dedicated-function IOPs; the exception was the multifunction I/O processor, where multiple functions could be executed on one IOP. Since most IOPs were dedicated to a single function, the various adapter cards containing the IOPs and IOAs tended to share very little. As a result, numerous different processors were used for the IOPs, mostly from the Motorola 68000 family, and many device interfaces unique to the AS/400 were created. Examples of the interfaces supported include SCSI, SCSI-2, workstation, LAN, FDDI, ATM, WAN, token-ring, ISDN, and frame relay. In short, standardization was not a high priority (Soltis, 2001).

Rather than completely replacing and immediately phasing out the SPD bus and all SPD I/O adapters, the changes were staged over four releases. This approach allowed customers to migrate to the new AS/400e models and move many of their SPD I/O adapter cards to the new system, and a transition I/O structure was created to support both SPD and PCI adapter cards for the next few releases. Soon the need arose to increase bus bandwidths beyond the SPD bus's capability, since customers demanded tremendous increases in processor, I/O device, and interconnection network speeds, and the system had to prepare for the likes of 1 to 2 gigabit/sec Fibre Channel buses. The result of this effort is the RIO bus, which consists of a pair of byte-wide unidirectional point-to-point buses, each containing 8 data lines, one clock line, and one flag line. Another major change during this transition period focused on the IOPs: starting with V4R1, all new IOPs were designed to have a more uniform structure.
The IOPs use PowerPC processors from the 32-bit 603 and 740 families, and unlike most of the previous IOPs, they support the attachment of multiple IOAs (Soltis, 2001). Before V4R1, the internal buses of the IOPs were based on the Intel 960 bus, the Motorola 68000 bus, the IBM Micro Channel, or a proprietary magnetic-media bus architecture; since V4R1, all new IOPs use only the PCI bus. The PCI bus architecture allows a shared bus to be used for interconnecting the I/O components in the server. When one device is using the bus, no other device can use it at the same time; although multiple PCI buses may be used in a server, the shared-bus architecture limits the total I/O bandwidth that can be achieved. With InfiniBand, the shared bus is replaced by high-bandwidth, high-speed, point-to-point connections between devices. InfiniBand is known as a switched-fabric architecture because it uses switches to make point-to-point connections, similar to the way switches are used in the iSeries memory subsystem. The InfiniBand architecture is also called a channel architecture because of its conceptual similarity to the S/370 channel that inspired the I/O structure of the S/38; because InfiniBand is a channel, it has more intelligence built into it than a PCI bus does.

iSeries and the Internet

IBM has positioned the iSeries as the best hardware platform for Internet-based applications by offering software development and management tools such as WebSphere, and by creating a multifaceted iSeries environment called the integrated file system (IFS). Server software is generally written to run in a UNIX or Windows operating system environment; in other words, servers expect an environment in which there are directories, subdirectories, and stream files, not libraries, objects, and members. In reality, however, everything in the iSeries native environment may be classified as a library, object, or member (Janson, 2007). There are therefore many alternative system environments on the iSeries: the iSeries supports many universes, of which the native environment is only one. The IFS encompasses all storage on an iSeries and is composed of several directories, of which the Root directory (/) is the parent, or primary, directory. Everything stored on the iSeries is placed under the Root directory. Directories are similar to libraries in that they are used to organize files; however, unlike libraries, directories can have subdirectories. The Root already contains a series of predefined subdirectories that support other common computer system environments. For example, the QOpenSys subdirectory operates like a UNIX system environment, and the QSYS subdirectory encompasses the iSeries native environment. Each of these separate environments has its own rules regarding storing information and executing programs (Janson, 2007). Data or a program stored in the Root or QOpenSys is stored as a stream file, and there are variations among these predefined directories; for example, Root is not case sensitive, whereas QOpenSys is. In addition, the IFS supports a current directory, which is analogous to the current library in the native environment: whenever a user does not specify a directory, the system uses the current directory, which may be set to the Root or to a directory created by the user.

Working in the IFS: Studio's Remote System Explorer (RSE) provides a tree-diagram interface to all the directories and files of the IFS. The iSeries native environment supplies IFS commands that allow users to work with any object irrespective of which directory it is stored in. These commands can be issued from the command line on any menu or screen, just like CL commands, and prompting also works with them. The workhorse of these commands is WRKLNK, i.e. Work with Object Links. WRKLNK displays a list of object names, i.e. links, contained within a specified subdirectory. When issuing the WRKLNK command, the user is required to identify the path to be displayed. For example, to display all the objects within YOURLIBXX, the full IFS path through the Root, QSYS, and YOURLIBXX should be specified. The syntax for the WRKLNK command would be: WRKLNK OBJ('/QSYS.LIB/YOURLIBXX.LIB/*') (Janson, 2007)
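
As a purely illustrative aid, the naming convention used in this command can be captured in a small hypothetical Python helper that assembles QSYS.LIB-style IFS paths; it is not an IBM API.

    # Hypothetical helper: build an IFS path into the QSYS.LIB file system,
    # where each path component carries its object type as a suffix.
    def qsys_path(library, obj="*", obj_type=None):
        """Assemble an IFS path for an object (or all objects) in a library."""
        path = f"/QSYS.LIB/{library.upper()}.LIB/"
        if obj == "*":
            return path + "*"                 # all objects in the library
        return f"{path}{obj.upper()}.{obj_type.upper()}"

    print(qsys_path("YOURLIBXX"))                      # /QSYS.LIB/YOURLIBXX.LIB/*
    print(qsys_path("YOURLIBXX", "CUSTOMER", "FILE"))  # /QSYS.LIB/YOURLIBXX.LIB/CUSTOMER.FILE

The WRKLNK example above corresponds to the first form, with the resulting path wrapped in single quotes and supplied to the OBJ parameter.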

In the WRKLNK command shown above, QSYS and YOURLIBXX require the object type LIB, and the file-naming convention of separating the object name and type with a period must be followed. The path is enclosed in single quotes, and the Root is indicated by the first forward slash; a forward slash is also used to separate each subdirectory from its parent directory. An asterisk must also be specified: the asterisk means that all the objects within YOURLIBXX are displayed. Subsetted lists may be displayed by using asterisks to replace the object links' names, types, or portions of the names and types. For example, specifying *.file would display only file objects, while specifying /A would display only those object links in the Root directory whose names begin with A. The Work with Object Links display screen functions exactly like the Work with Objects Using PDM screen: functions are carried out by specifying options to the left of the object links or by pressing a function key (Janson, 2007). Server software on an iSeries is usually installed in the Root or QOpenSys directory, and each server requires Web pages to be stored in a certain directory. In this case, the server requires all Web pages to be stored in the following directory path:

  • /QIBM/ProdData/HTPP/Public/HTTPSVR/HTML/
  • The IP address of the iSeries followed by that path and the file name can be given as, for instance: 123.456.789.3/QIBM/ProdData/HTPP/Public/HTTPSVR/HTML/coolpage.html

Because such long paths invite typing mistakes, nicknames for these paths can fortunately be established. These nicknames are configured with the WRKHTTPCG command, i.e. Work with HTTP Configuration. Issuing this command displays a screen with all sorts of configuration information, which contains both required information and preferences. When the HTTP server starts, the configuration information is read and the HTTP server performs all the necessary set-up tasks defined there (Janson, 2007). In the above example, a path nickname needs to be set up for the server. The Pass command assigns a nickname to an existing directory path, and when added to the configuration information, the following Pass command will establish /sample as a nickname for the server's Web page path: Pass /sample/* /QIBM/ProdData/HTPP/Public/HTTPSVR/HTML/*

Many commands may be too long to fit on the Work with HTTP Configuration screen. To display and edit a line, a 2 is typed in the option area to the left of the line and ENTER is pressed; the Change HTTP Configuration Entry screen is then displayed with the full command. This Pass command creates a nickname of / for the Web page stored in the Welcome.html file (Janson, 2007). The Welcome.html file comes with the HTTP server software and contains a sample Web page. The Pass command means that the Welcome.html file will be displayed when only the URL or IP address is specified; in short, this Pass command establishes Welcome.html as the default home page for the server. Furthermore, a Pass command can be added to make /sample a nickname for the default Web page path. To add a line, a 3 is typed in the option field on the blank line on the Work with HTTP Configuration screen, and a sequence number is entered where the new line is to be added in the configuration data. Restarting the HTTP server will then establish the user-friendly path name of /sample for the path /QIBM/ProdData/HTPP/Public/HTTPSVR/HTML.
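
A minimal sketch of what a Pass directive accomplishes, i.e. translating a nicknamed request path into a real directory path, is shown below in Python. The rule table and the mapping function are assumptions made for illustration and do not reproduce the HTTP server's actual configuration handling.

    # Hypothetical illustration of Pass-style mapping: each rule pairs a
    # request-path template ending in /* with a file-system path template.
    PASS_RULES = [
        ("/sample/*", "/QIBM/ProdData/HTPP/Public/HTTPSVR/HTML/*"),
        ("/",         "/QIBM/ProdData/HTPP/Public/HTTPSVR/HTML/Welcome.html"),
    ]

    def map_request(request_path):
        for pattern, target in PASS_RULES:
            if pattern.endswith("/*") and request_path.startswith(pattern[:-1]):
                # Replace the matched prefix, keeping the rest of the request.
                return target[:-1] + request_path[len(pattern) - 1:]
            if request_path == pattern:
                return target
        return None   # no rule matched; the server would return an error

    print(map_request("/sample/coolpage.html"))
    # -> /QIBM/ProdData/HTPP/Public/HTTPSVR/HTML/coolpage.html
    print(map_request("/"))
    # -> /QIBM/ProdData/HTPP/Public/HTTPSVR/HTML/Welcome.html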

There are several ways to move a Web page to the correct directory on the iSeries. For example, SEU could be used to enter the HTML source code into a member, and then the CPYTOSTMF command could be used to move the page to the correct IFS file. However, using Studio is far easier. Publishing a page is called exporting in Studio. In this example, StudioPage.html is to be exported to the iSeries directory path /QIBM/ProdData/HTPP/Public/HTTPSVR/HTML. To do this, first the file StudioPage.html is selected in the Navigator tree, and then FILE and EXPORT are clicked to display the Export window. At the Export window, the Remote File System option is selected and the Next button is clicked. At the Export RFS window, the directory path may be typed in the Folder textbox. The path can also be specified by using the Browse for Folder window, which provides a navigation tree for all external systems identified by WebSphere; in this case, the user expands the iSeries entry and drills down to the path /QIBM/ProdData/HTPP/Public/HTTPSVR/HTML (Janson, 2007).

Types of files in the IBM System i

Keyed Files: Storage devices physically access the storage media either sequentially or directly. Direct access means that the storage device can go directly to any storage location and read the information stored there. Sequential access means that the storage device must go through all storage locations that physically precede the storage location being sought (Janson, 2007). Tape devices provide sequential access to data records, whereas computer disks provide direct access. When accessing a particular data record, however, there needs to be a way to identify that record. Each record is allocated a number based on its relative position within the file, called the relative record number; this is the number DFU has been using to identify records. Relative record numbers work well for something like a CD with a small number of songs, but with data files containing thousands of records it is difficult to keep track of each record's number. The most common method of identifying records is with a primary or unique key. Traditionally, the primary key controlled the physical location of data records, and there could be many unique keys but only one primary key. The iSeries allows a single primary key to be defined for a physical file; however, the primary key does not control the location of data records. Files may also have secondary keys. These fields do not necessarily have a unique value for each record; for example, a student file may have the field eye color as a secondary key. Searching the file for a particular eye color will not find a single record, i.e. the search would return multiple records. Thus, non-unique secondary keys cannot be used to identify a particular record (Janson, 2007).

Sequential files store records one after another in key order. Access to a sequential file is limited to the sequential access method, i.e. each record is processed in the order in which it is placed in the file, and such files can support access in only one key field order. This is the main reason why pure sequential organization is rarely used. Keyed sequential files allow each record to be uniquely identified; however, retrieving a specific record means that each record is read and its key field's value is checked. In an indexed sequential file, each record contains a unique value for the key, records are grouped into blocks, and each block contains records for a specific range of key values. Within a block, the records are stored sequentially in key order; the blocks themselves do not have to be in key order. The key to this organization is the index, a file that contains an index record for each block of data records. Each index record contains two major pieces of information: the highest key value contained within the block and the starting storage address of the block. The index records are stored in key order, and the index contains only the key values and the storage locations (Janson, 2007). When accessing an indexed sequential file for a particular record, the first step is to search the index sequentially: each index record's key value is checked against the key value being searched for, and when an index value greater than or equal to the search value is found, the block that would contain the data record has been identified.
The system then goes directly to that block, as indicated by the storage address in the index record, and reads each data record in the block sequentially, looking for the exact key value. Space utilization can be improved by using a more complicated address calculation called a hashing routine. There are several types of hashing routines, but what characterizes them all is that they generate a smaller range of addresses. A typical example is the division/remainder method: the number of storage locations that will be in the file is determined, the largest prime number less than the number of records to be stored is chosen as the divisor, and each key is divided by that prime so that the remainder can be used as the record's storage address. This method is based on the fact that the number of possible remainders generated by a division is equal to the divisor (Janson, 2007). Each operating system or file management system implements these file organizations and access methods in its own manner, and each generally has its own naming conventions. For instance, the iSeries supports access to files in arrival sequence order; that is, a user or program can access records in the order in which they arrived in the file. As far as access is concerned, arrival sequence is similar to sequential access. However, the iSeries does not store records in arrival order: it employs sophisticated algorithms that calculate the most efficient use of available space and resources and may even scatter records across physical storage devices. The iSeries provides arrival sequence access through an access path, which can be thought of as an index. For example, an arrival sequence access path that supports only sequential access to a file could simply consist of a list of storage locations; to access the file, the access path would be searched sequentially and the records read directly (Janson, 2007).
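
The division/remainder hashing and the block-index search described above can be sketched as follows. The function names and the sample keys, blocks, and record data are illustrative assumptions, not part of any iSeries interface.

    # Hypothetical sketch of the two access techniques described above.

    def largest_prime_below(n):
        """Largest prime smaller than n, found by simple trial division."""
        def is_prime(k):
            if k < 2:
                return False
            return all(k % d for d in range(2, int(k ** 0.5) + 1))

        for candidate in range(n - 1, 1, -1):
            if is_prime(candidate):
                return candidate

    def hash_address(key, storage_locations):
        """Division/remainder hashing: the remainder becomes the address."""
        divisor = largest_prime_below(storage_locations)
        return key % divisor

    print(hash_address(48217, 1000))   # a slot in the range 0..996

    # Indexed sequential lookup: the index holds, for each block, the highest
    # key in the block and the block's starting location.
    index = [(199, 0), (430, 1), (998, 2)]            # (highest key, block no.)
    blocks = {0: [(105, "A"), (199, "B")],
              1: [(220, "C"), (430, "D")],
              2: [(500, "E"), (998, "F")]}

    def indexed_lookup(key):
        for highest_key, block_no in index:           # sequential index search
            if key <= highest_key:                    # this block may hold the key
                for record_key, data in blocks[block_no]:
                    if record_key == key:             # sequential search in block
                        return data
                return None                           # key not present
        return None

    print(indexed_lookup(430))   # D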

Logical Files: A database management system will usually provide three ways of viewing a database, i.e. a physical view, a global view, and the ability to build multiple user views of the data. Because of the iSeries single-level approach to storage, the physical view of data is mostly hidden from the iSeries user: one need not know the disk, track, or sector on which data is stored, and any specific location addresses or indices are used internally by the DBMS and are not readily available to the user. The iSeries does, however, offer a limited physical view of the data with the DSPPFM command, along with some capability to manipulate the physical organization of the data with the RGZPFM command. DDS provides a global view of all the data as files, and it also allows the construction of individual views; this is achieved through both physical and logical files. Physical files contain the definitions of the individual fields within the file and include an access path to the data (a key) and the data itself (Janson, 2007). Taken as a whole, these definitions provide a global view of all data on the iSeries. Users often want to see data in a form other than how it is physically organized, so the iSeries employs another type of object, called a logical file, that provides alternative access to the data in physical files. Like physical files, logical files contain file definitions; however, logical files contain only field definitions and an access path, and no data is stored in a logical file. Since logical files do not contain any data, they must reference data already defined in a physical file, but a logical file can reference fields from many physical files.

Through logical files, unique combinations of data appear to exist without any duplication of data (Janson, 2007). Like physical files, logical files are defined with DDS and are created by compiling the member of the source physical file that contains the logical file's DDS definition. To create a logical file, first a member with a type of LF is created, then the logical file's DDS source definition is entered, and finally the DDS is compiled to create the logical file. Since logical files do not contain any data, they simply point to data contained in physical files (Janson, 2007). Logical files offer several benefits. First, they can be used to easily generate custom reports, which relieves the programmer of having to write and maintain a large reporting system; the number of report programs can be greatly reduced through logical files. The specialized access that logical files provide to physical files also simplifies program input and output logic. Because logical files are defined externally, they can be used by many programs; without logical files, every program would have to contain duplicate code to access the separate physical files. Lastly, logical files do not create any duplicate data (Janson, 2007).
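
How a logical file presents selected fields and a key order over data that physically lives in a physical file, without copying any records, can be sketched with the hypothetical Python class below; it is an analogy, not DDS.

    # Hypothetical sketch: a "logical file" that exposes a field subset and a
    # key order over records stored in a physical file, without copying data.
    physical_student_file = [
        {"ID": 3, "NAME": "LEE",   "EYECOLOR": "BROWN", "GPA": 3.4},
        {"ID": 1, "NAME": "SMITH", "EYECOLOR": "BLUE",  "GPA": 3.9},
        {"ID": 2, "NAME": "JONES", "EYECOLOR": "BROWN", "GPA": 2.8},
    ]

    class LogicalFile:
        def __init__(self, physical, fields, key):
            self._physical = physical   # reference only; no data is stored here
            self._fields = fields
            self._key = key

        def records(self):
            # Build the view on demand, ordered by the logical file's key.
            for rec in sorted(self._physical, key=lambda r: r[self._key]):
                yield {f: rec[f] for f in self._fields}

    by_name = LogicalFile(physical_student_file, ["NAME", "GPA"], key="NAME")
    for view_record in by_name.records():
        print(view_record)
    # {'NAME': 'JONES', 'GPA': 2.8} ... and so on, in NAME order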

Reference List

Janson, R. (2007). Introduction to the iSeries and WebSphere Studio Client. USA: Shroff/Janson Publishers.

Soltis, F. (2001). The Inside Story of the IBM iSeries. Colorado, USA: 29th Street Press.


Big Data Management: Looker, IBM, Oracle and SAS

Looker

Looker is one of the companies that provide superior systems for helping organizations benefit from the concept of Big Data. The company introduces this technology by producing and marketing its Looker app to different customers. It is a powerful platform designed to connect customers and business entities, and companies can utilize the app to monitor their clients in a flexible and scalable way (Tunguz & Bien 2016). The system is easy to secure and capable of supporting decision-making processes.

Looker's app presents a number of changes in the manner in which companies use Big Data to meet their clients' needs. The first one is that the system guides business leaders to make evidence-based decisions and engage in data analytics. The next one is that it provides a complete view of customers' needs, expectations, and experiences (Tunguz & Bien 2016). Another change is that the system has a pre-built design or app that is aimed at improving organizational performance. Firms can use it to link different units or departments to deliver positive results.

Many businesses embracing the use of the Looker app have managed to record a number of benefits or achievements. For instance, they have improved their decision-making procedures by getting real-time information about customers' behaviors and expectations. The use of the technology also guides HR managers to consider the questions and issues their employees present after interacting with different clients (Tunguz & Bien 2016). The app also improves communication between departments and units. Looker's solution has made it easier for companies to implement changes informed by customers' changing demands. The tool maximizes collaboration, participation, and communication.

The leading customers for this technology include learning institutions, emerging businesses, and government agencies. The majority of them support the solution since it can transform problem-solving, decision-making, and customer satisfaction initiatives (Tunguz & Bien 2016). Different stakeholders have also presented positive views regarding the use of this solution in different settings. Looker should, therefore, consider emerging issues and suggestions to improve the app continuously.

IBM

IBM is one of the corporations that take the issue of modern technologies seriously. It has partnered with Hortonworks to introduce superior Big Data systems that ensure that diverse data is analyzed and differentiated into unstructured and structured information, and it relies on the concept to achieve most of its goals. Using its existing cloud platforms, IBM has created complex systems that firms can consider to manage information, gather data, and analyze it to make superior business decisions (Chen, Argentinis & Weber 2016). The corporation has also become a leader in the production and marketing of customized Big Data solutions.

Companies that embrace this technology from IBM have managed to support several changes, including streamlining their operations and processes (Chen, Argentinis & Weber 2016). They have been able to solve emerging problems and produce superior products that resonate with the demands of the targeted customers. They also introduce new change models using the solution.

Businesses that utilize IBM's Big Data technology improve their operations by monitoring all emerging changes and competitive areas in their industries. This means that the technology has been of significant value for introducing new transformations. The technology has also streamlined decision-making, problem-solving, and service delivery processes. The introduction and use of advanced IBM Big Data solutions is a move that supports organizational change. This means that the company has transformed its processes and practices, thereby adding value to the targeted customers. According to Chen, Argentinis, and Weber (2016), this technology makes it possible for IBM's clients to fight all forms of cybercrime.

The use of Big Data has led to the production of superior systems that can meet the needs of all clients. This is a clear indication that the targeted customers have continued to perceive the use of Big Data positively (Chen, Argentinis & Weber 2016). Many stakeholders are also pleased with the idea since it ensures that they remain competitive in their industries.

Oracle

Being a leader in the data technology industry, Oracle has managed to produce and market Big Data solutions to different companies. With the presence of powerful and advanced cloud systems, this corporation has developed superior applications for extracting acquired data from different units and divisions (Abellera & Bulusu 2018). Many companies have employed competent programmers and professionals to handle such solutions and present desirable inferences.

Firms utilizing Big Data from Oracle have managed to record several changes. The first one is the manner in which customer information is gathered and analyzed. The second change is the way employees collaborate and analyze data (Abellera & Bulusu 2018). Another transformation is the ability to introduce superior work cultures, organizational structures, and practices.

The continued use of Big Data at this corporation is an approach that has added value to many companies' business processes and organizational change. For instance, firms can apply this technology to understand customers' emerging behaviors, expectations, and preferences. They also utilize it to monitor trends in the industry, thereby producing superior solutions and products that can meet their clients' needs (Abellera & Bulusu 2018). The strategy has also streamlined the way companies handle customers' complaints and issues. Similarly, the technology has supported a new change focusing on business performance. This is true since Oracle has developed a superior model for meeting customers' needs and delivering organizational goals.

With such initiatives, many customers perceive Oracle's Big Data solutions positively since they receive more high-quality services and IT solutions than ever before. Some have also acknowledged that there is a need for firms to continue monitoring emerging technologies from Oracle in order to address their demands. Similarly, all stakeholders have been supportive throughout the process (Abellera & Bulusu 2018). According to them, Oracle's Big Data system is a powerful model for maximizing profitability, improving performance, and supporting the needs of community members.

SAS

SAS is another company that understands the importance of data analytics and how the technology can make a significant difference for many companies and small businesses. This corporation's Big Data technology focuses on the best analytical procedures that customers can implement to ensure that the collected data and information makes more sense (Pope 2017). Customers can, therefore, purchase an analytical platform that is best suited to meet their unique expectations.

SAS's approach to Big Data has led to numerous changes in many sectors and industries. For instance, companies can now use its analytical tools to study and monitor complex business scenarios and make positive inferences. This solution also offers timely responses and decision support systems (DSSs) that guide managers to monitor customers' complaints, suggestions, and expectations (Pope 2017). Consequently, SAS's Big Data solutions have revolutionized the way companies pursue their business objectives.

As described above, SAS's Big Data technologies have added value to business processes and organizational changes. For example, companies that implement them can gather adequate data, analyze it, and use the acquired ideas to support superior strategies. Companies that implement such technologies will streamline a wide range of processes, coordinate operations, and improve communication (Pope 2017). Every form of data is applied or analyzed to transform business performance. Companies using SAS data analytics solutions find it easier to support new organizational changes, thereby delivering timely and positive results. The model can also guide business leaders to introduce better systems and practices.

Businesses currently using SAS Big Data systems have appreciated them since they deliver the intended objectives. Customers can make timely decisions, improve their clients' experiences, and monitor suspicious activities. The cloud-based solutions also support the decision-making process. Stakeholders have perceived this technology in a positive manner since it adds value to them (Pope 2017). SAS has achieved its goals while at the same time fulfilling its stakeholders' expectations.

Future Scope

The concept of organizational change is relevant since it guides companies to introduce superior ideas and procedures for adding value, increasing competitive advantages, and empowering their clients. However, managers planning to transform their corporations must consider the future of technology. The case of Big Data shows that modern organizational changes would be impossible or unsuccessful without involving these key stakeholders: customers, employees, community members, and shareholders (Matthias et al. 2017). The idea of data analytical solutions describes how firms can collect adequate data from different individuals and use the acquired information to make informed decisions.

The insights and ideas gained from such systems will be considered to propose the best steps and procedures for implementing the intended change. This means that more companies and entrepreneurs will monitor emerging technologies and combine them with the existing strategies in an attempt to implement the targeted transformational initiative. In the future, innovative ideas and solutions will be capable of guiding all aspects of organizational change. Through the use of emerging technological solutions, more companies will be able to monitor and understand their customers demands. They will also appreciate the ideas and trends experienced in their respective sectors (Halaweh & Massry 2015). With Big Data in place, such corporations will introduce superior systems and processes that can improve performance. Such attributes will be implemented using evidence-based change models.

Resistance to any new transformation is a common occurrence in many organizations. Modern technologies can guide leaders and managers to gather data from different channels or processes. The collected information will reveal the unique reasons why some employees might be willing to object to a proposed change initiative. With proper analytical or data solutions, managers can address such difficulties and support the intended change (Eine, Jurisch & Quint 2017). The ultimate objective is to ensure that the introduced transformation adds value to the company and makes it more profitable.

Finally, any form of change will succeed if managers combine it with existing or modern technologies. This means that leaders will involve technical experts to guide the process and solve emerging problems (Hassani & Gahnouchi 2017). The introduction of modern technologies can be treated as a change that is capable of streamlining organizational performance and delivering high-quality products and experiences to all key stakeholders. In conclusion, firms should acknowledge that technological innovation is the future of all organizational changes if they are to achieve their potential and remain relevant in their respective industries.

Reference List

Abellera, R & Bulusu, L 2018, Oracle business intelligence with machine learning: artificial intelligence techniques in OBIEE for actionable BI, Apress, New York.

Chen, Y, Argentinis, E & Weber, G 2016, IBM Watson: how cognitive computing can be applied to big data challenges in life sciences research, Clinical Therapeutics, vol. 38, no. 4, pp. 688-701.

Eine, B, Jurisch, M & Quint, W 2017, Ontology-based Big Data management, Systems, vol. 5, no. 3, pp. 45-58.

Halaweh, M & Massry, AE 2015, Conceptual model for successful implementation of Big Data in organizations, Journal of International Technology and Information Management, vol. 24, no. 2, pp. 21-34.

Hassani, A & Gahnouchi, SA 2017, A framework for business process data management based on Big Data approach, Procedia Computer Science, vol. 121, no. 1, pp. 740-747.

Matthias, O, Fouweather, I, Gregory, I & Vernon, A 2017, Making sense of Big Data – can it transform operations management? International Journal of Operations & Production Management, vol. 37, no. 1, pp. 37-55.

Pope, D 2017, Big data analytics with SAS, Packt Publishing, Birmingham.

Tunguz, T & Bien, F 2016, Winning with data: transform your culture, empower your people, and shape the future, Wiley, New York.


IBM WebSphere Model and Products

Overview of IBM WebSphere Model

WebSphere is connecting software for integrating resources and designing service-oriented infrastructure. The term is, as a rule, used to refer to a specific IBM product, IBM WebSphere Application Server. WebSphere belongs to the category of middleware, i.e. intermediate software that allows electronic business applications to run on different platforms on the basis of Web technology. IBM WebSphere comprises a set of products whose goal is to help developers deploy Web applications (Hansmann, 2005).

IBM WebSphere includes a wide range of products, such as CICS Transaction Server, WebSphere Business Monitor, WebSphere Commerce, WebSphere Edge Server, WebSphere Portal, WebSphere Product Center, WebSphere Customer Center, and a number of other products that are necessary for conducting electronic business. From a central console, systems administrators can configure, monitor, and manage business integration software across host and distributed platforms (Spencer, 2004). Certain WebSphere products allow the modeling of corporate business processes, connection with the systems of customers and business partners, direct monitoring of business processes, application integration, and the management and optimization of business process effectiveness.

Different application servers utilize the Java Message Service (JMS), which provides messaging capabilities. JMS and the additional messaging features of WebSphere 5.0 provide further evidence for the fact that the asynchronous and synchronous programming models are both required to build next generation applications (Francis, High, Herness, Knutson, Rochat, & Vignola, 2002). The application server is needed to provide these functions to environments that lack the necessary infrastructure, as well as to those infrastructures that require the tighter integration and management an application server can guarantee. WebSphere Application Server consists of numerous server offerings, each with its own functions and capabilities. This allows WebSphere Application Server to address a broad spectrum of solutions, ranging from the most rudimentary web application to transactional and scaleable e-business applications (Francis et al., 2002).

Advantages of WebSphere from an IT and business perspective

One of the greatest advantages of using WebSphere from the IT perspective is the wide range of products within one family. It is common knowledge that products designed by the same company tend to work better together than products from several manufacturers. WebSphere has a comprehensive set of products which, installed together, can ensure that business processes are carried out properly.

From a business perspective, the products offered by WebSphere allow business operations to be conducted at a high level while ensuring reliable connections with business partners and consumers around the globe. One of the WebSphere products, WebSphere Process Server, can be helpful in operating business processes. This program complex supports solutions created on the basis of service-oriented infrastructure; it is used for realizing complicated business processes as well as traditional business integration, such as application integration on an enterprise scale. WebSphere Process Server is based on another product of this family, WebSphere Application Server Network Deployment, and inherits all its advantages, namely clustering, high availability, and built-in message and transaction management capabilities.

Other WebSphere Products Contributing to Advantages of WebSphere for Business

WebSphere Business Monitor

Another WebSphere product which can be helpful in business is WebSphere Business Monitor. This Web application provides users with dashboards that allow different aspects of business performance to be monitored. The dashboards are built from portlets, making it possible to quickly find the necessary information, to analyze and compile reports on business performance, and to adjust operations and notifications as part of business performance management. IBM WebSphere Business Monitor enables users to view dashboards, to analyze how their processes are working, to track individual items, and to identify bottlenecks (Bieberstein, Laird, Jones, & Mitra, 2008).

WebSphere Business Integration

In addition, TopS BI offers services for introducing integration systems and business process management based on IBM WebSphere Business Integration. The WebSphere Business Integration family addresses the issue of connectivity management, isolating applications from concern with network protocols and platform dependencies (Yusuf, 2004). These services include a comprehensive survey covering business process and IT-infrastructure analysis, working out the architecture of the integration solution, and setting up and adapting monitoring facilities for business processes, information security, and administration. The WebSphere Business Integration Family also offers a suite of adapters that bridge between popular off-the-shelf business applications, such as SAP or PeopleSoft, and WebSphere MQ (Yusuf, 2004).

WebSphere Business Modeler

Finally, IBM WebSphere Business Modeler allows users to model, design, and analyze business processes, integrate new and improved processes, and determine the organizational elements, resources, and business objects of the company that are necessary for realizing these processes. The program is aimed at achieving correspondence between object life cycles and business process models. Currently supported features include: object life cycle conformance and coverage checking, semi-automatic resolution of selected compliance violations, extraction of object life cycles from a process model, and generation of a process model from several object life cycles (Alonso, Dadam, & Rosemann, 2007).

WebSphere Business Modeler allows the creation of several models: a process model, which is a graphic representation of the business processes existing in the company; a resource model, which defines the different types of resources and their instances used in the other models (for instance, existing corporate information systems may be used as resources); an information model, which represents the data structures used in business processes; an organizational model, which defines the structure of the enterprise, its organizational units, and the resources connected with them; and an analytical model, which defines the basic metrics and characteristics of the business processes. The creation of these models follows the Business Process Modeling Notation (BPMN) specification, which is aimed at forming a simple and intelligible description of business processes that is clear to all types of workers, from business analysts to technical experts.

Recommendations

The healthcare industry has historically made little use of IT for improving the services it delivers, which is why the gap between healthcare services and technology is now immense. The best way out of this situation is the SOA model, which would help create the information exchange systems the healthcare industry needs. The implementation of SOA would advance medicine and improve the quality of healthcare services. SOA can play a significant role in the future of medicine since it allows the creation of community sites controlled by experts, and of different medical building blocks, which can facilitate the work of medical librarians, students, scientists, and physicians. The assets and requirements that an SOA model comprises can be reused by other healthcare projects, which is beneficial for an industry where sharing of information is extremely important. Web services are likely not only to change the existing ways of practicing medicine and to improve the delivery of healthcare services, but also to reduce costs through better utilization of IT resources.

References

Alonso, G., Dadam, P., & Rosemann, M. (2007). Business Process Management: 5th International Conference, BPM, Brisbane, Australia, Proceedings. Springer.

Bieberstein, N., Laird, R.G., Jones, K., & Mitra, T. (2008). Executing SOA: A Practical Guide for the Service-Oriented Architect. Addison-Wesley.

Francis, T., High, R., Herness, E., Knutson, J., Rochat, K., & Vignola, C. (2002). Professional IBM WebSphere 5.0 Application Server. John Wiley and Sons.

Hansmann, U. (2005). Pervasive Computing: The Mobile World. Springer.

Spencer, D. (2004). IBM Software for E-business on Demand: Business Transformation and the on Demand Software Infrastructure. Maximum Press.

Yusuf, K. (2004). Enterprise Messaging Using JMS and IBM WebSphere. Prentice Hall PTR.


IBM SPSS Software Analysis

Software Functions

IBM Statistical Package for the Social Sciences (SPSS) is a simple, integrated computer program used to analyze and interpret data. It is used by researchers, analysts, and business people because of its analytical and statistical capabilities. The software supports activities such as planning, reporting, data preparation, analysis, and deployment. In addition, SPSS provides a flexible set of options for standalone business applications. It is important to note that the software has been released in different versions, namely SPSS 9.0, 10.0, 11.0, 12.0, 13.0, 14.0, 15.0, and 16.0. Field (2000) points out that SPSS is compatible with other programs like Excel 2007, making it very easy to convert data in Excel to an SPSS data file. It is also easy to save SPSS data files directly to Excel spreadsheets, to open an Access database using more advanced SPSS releases, and to customize the variable attributes that are displayed in SPSS files.

Instructions for Use

IBM SPSS is one of the statistical packages used for analyzing and interpreting data, and it is by far the most common software used by psychologists. The help section is very important for anyone new to SPSS: it documents the basic data analysis menus and dialog boxes, which are easy to use without formally learning SPSS. The dialog and menu boxes are useful because they provide options for every step taken during data analysis. The help section also provides interpretation guidance and questions that help psychologists better understand and interpret results. In this way, SPSS helps in estimating the mean and variance of the underlying population.
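As a minimal sketch (outside SPSS itself), the same two estimates can be reproduced in a few lines of Python; the scores below are hypothetical and stand in for a data file that SPSS's descriptive procedures would summarize.

    # Hypothetical sample of test scores.
    import statistics

    scores = [12, 15, 11, 14, 18, 13, 16, 15, 12, 17]

    mean = statistics.mean(scores)          # sample mean
    variance = statistics.variance(scores)  # unbiased sample variance (n - 1 denominator)

    print(f"mean = {mean:.2f}, variance = {variance:.2f}")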

In the help section, there are directions on how to use the three main windows and the menu bar at the top; these allow the user to view data, see the output, and inspect any other programming item that has been used. It is important to note that psychology is a science and, just like any other discipline, it requires extensive research. Psychologists get their information through systematic observation of a particular subject matter (mostly human beings). Habing (2003) argues that human beings are very difficult to analyze because they vary over time. The work of the psychologist is therefore difficult, and to determine how human beings react to a particular situation, a psychologist needs statistical software to predict what trends are present in the data on an individual or a population. Thus, psychologists are likely to use IBM SPSS in psychological research.

Validity has two distinct aspects, namely the research design and the degree to which a test measures what it is intended to measure. Thus, validity indicates the extent to which a study supports the conclusions drawn from its final output. Validity is further classified into four types: external validity, construct validity, statistical validity, and internal validity. Internal validity is an inductive estimate of the extent to which conclusions about causal associations are likely to be true, given the measures used, the entire research design, and the research setting. An efficient experimental procedure, in which the effect of an independent variable on a dependent variable is monitored under highly controlled conditions, normally allows a higher degree of internal validity than any single-case design. Therefore, internal validity supports the conclusion that the causal variable influences the variables in the study (Field, 2000).

References

Field, A. (2000). Discovering Statistics Using SPSS for Windows. London; Thousand Oaks; New Delhi: Sage Publications.

Habing, B. (2003). Web.


Analysis of System of Files Provided by IBM System I

The New File System

The file system is one of the most visible parts of the operating system. Almost all operating systems let users specify named objects, called files. Files hold programs, data, or anything else the user desires. The operating system also provides various APIs to create, read, write, destroy, and manage files. Moreover, most operating systems have their own unique and distinct file types. For example, UNIX has regular directories, files, and special block and character stream files. Regular files hold user data. Directories are essentially used to keep track of files and consist of information needed to supply files with symbolic names. Also, block and character stream files are used for modeling disk and terminal-like devices. Moreover, the very first application enablers added was the ability to store files from other operating system environments, like UNIX. At V3R1, the integrated file system (IFS) was introduced which not only created a consistent structure for all the existing file systems seen in the AS/400, but also created a new file system structure altogether needed for other operating systems. With passage of time, new file systems have been added. The IFS supported 10 file systems and three server types, by the time the iSeries was announced.

Features and Advantages

The IFS is said to be a part of OS/400, and provides a consistent structure for applications and users that were previously part of the AS/400. In addition, it provides support for the stream I/O files utilized by the PC and UNIX operating systems. Stream files, which contain long continuous strings of data, have become increasingly significant for storing images, audio, and video files (Soltis, 2001). Moreover, the IFS provides a common view of stream files that are stored either locally on the iSeries or a remote server. To manage all the files, a new hierarchical directory structure was created. This hierarchical directory permits access to objects by specifying the path to the objects, through the directories, in a way similar to the one seen in PC and UNIX file systems. This directory and file structure allows information such as UNIX stream files, record-oriented database files, and file serving to be handled through separate file systems or a common interface, depending on the applications needs. The main advantage and unique capability of the IFS is the ability to support separate file systems through a common, consistent interface. In the original S/38, it was easy to search for an object in the database by simply looking up the name in a library. A library supplied a means by which objects could be organized into distinct groups, and allowed objects to be found by a name. The similar library structure was incorporated into OS/400 (Soltis, 2001).

Libraries: Libraries are said to be OS/400 objects which are used for finding other objects in the database. Moreover, the library is arranged in a single-level hierarchical manner, unlike the directory structure found in UNIX and PC operating systems which show a multi-level hierarchy. Finding an OS/400 object needs the object names and the library along with the object type to uniquely identify the object. Exceptionally, a library cannot reference other libraries, which will, conversely, violate the single-level hierarchy of Library/Object. However, there is a unique and special library called QSYS that is used to reference other libraries (Soltis, 2001).

Shared Folders: Shared folders were brought into OS/400 primarily to support Office/400 functions. The S/36 was an efficient office system, and most of the concepts related to folders originated from this system. An OS/400 folders object was included to support these Office functions. In other words, the integrated Office support provided a filing system for all Office objects that contained data for an Office product. Documents, emails, programs, and file were amongst the conventional items that were present in this file system. Furthermore, document library services allow users to treat the filing system as an electronic filing cabinet furnished with document library objects in folders. Folder management services allow users to organize the objects in these folders. Additionally, folders may contain other folders and may be searched interactively. Therefore, along with the traditional Office items as stated above, shared folders contain stream, files for graphs, images, spreadsheets, PC programs, and PC files. Moreover, IBM replaced PC support with a product called Client Access that provided a platform for distributed client-server computing (Soltis, 2001). Initially, the primary challenge had been to determine how to construct a single file system that could integrate all file system types, since the separate file system was not designed to be compatible to each other. As previously discussed, the single-level-hierarchy OS/400 library structure is a subset of the one used in PC operating system with different names but similar structure. The PC operating systems contain files instead of objects, and a library is called a directory in which files exist. Unlike the OS/400 library structure, directories which exist within directories on the PC are usually called sub-directories. This arrangement, thus, creates a multi-level hierarchy for PC file naming, as opposed to the single-level structure employed by the OS/400 library. A PC file name takes the form DIR1DIR2&..DIRnFILENAME. This is a superset of the OS/400 library naming structure, except for the backward slashes (). Likewise, a UNIX file structure is a superset of the OS/400 library structure. Moreover, a UNIX file system allows multiple paths to the same object. The solution for combining all these file systems was to use a single root from a UNIX-like file system and to place all others under this root. A path name then takes the form DIR1/DIR2/&DIRn/FILENAME. The lengths of the file and directory names in IFS are increased to match the UNIX-based open system standards, like Posix and XPG. In OS/400, the names in the IFS directories are stored in a format known as Unicode, which is an international standard supporting multiple languages, which includes the double-byte character sets used by many nations (Soltis, 2001). The data format must be compatible with the application that requests it, because the IFS allows accessing the data, or the data must be converted to a compatible format. OS/400 manages the conversion between data formats such as Extended Binary Coded Decimal Interchange Code (EBCDIC) used for native iSeries applications, and the American Standard Code for Information Interchange (ASCII) in the PC world. All the files and file systems in the IFS are treated like OS/400 objects. Moreover, file system supports the same set of operations, described for each file in a standard operations table. 
Each file system possesses its own set of logical structures and rules for interacting with information in storage, and these structures and rules differ from one file system to another. By employing a table approach, new file systems can be introduced to the IFS without requiring changes to existing file systems or to the operations corresponding to those file systems. Furthermore, the IFS treats the original library support and the original folders support as separate and distinct systems, along with the other file support that has varying capabilities. With this structure, new file systems may be integrated into the IFS as required. The virtual file system (VFS) architecture provides a common interface to every file system in the IFS. VFS is an object-oriented interface that provides objects called nodes, which are abstract objects representing the real objects stored in the file system. The nodes allow a collection of abstract operations to be defined that can be performed on the real objects (Soltis, 2001). The UNIX-, Windows-, and OS/400-based servers allow the iSeries to act as both client and server when sharing data over a network. Additionally, the IFS supports xSeries servers, either remotely attached to the iSeries or integrated into it, as well as a Linux server running in an iSeries partition.

NFS Server: The NFS file system and NFS server let the iSeries act as both client and server by allowing remote file systems or directories to be mounted locally on a PC, a UNIX workstation, or the iSeries. The NFS file system allows users to access data and objects stored on a remote NFS server. Through the NFS server, the iSeries can export a network file system, and data can be exported from any of the following file systems: Root (/), QSYS.LIB, QOpenSys, QOPT, and UDFS. Communications between client and server are accomplished through Remote Procedure Calls (RPCs) (Soltis, 2001).

OS/400 NetServer: OS/400 NetServer allows Windows PCs to access IFS directories and OS/400 output queues, with TCP/IP configured on both the iSeries and the PC. PC users access the shared information via the Windows Network Neighborhood. A PC user connected to the iSeries views the IFS as a disk drive containing directories and objects, and can operate on files in the IFS by using either the file-sharing clients built into Windows and NetWare or Client Access Express for Windows. Major changes have been made to the Client Access support of the iSeries. Previously, the Client Access family of products used proprietary support for printers and network drives. These products have been replaced with two new products that do not require the proprietary support, i.e. iSeries Client Access Express for Windows and iSeries Access for Web. The iSeries Express client uses built-in features of the Windows OS to communicate with OS/400 NetServer for network file and print access. By using the Express client, users can access files using Client Access SPIs through the File System in Operations Navigator (OpsNav). Directories may be created, removed, and renamed when working with the IFS under File System in OpsNav. Moreover, drag-and-drop functions are also available in the Express client under Integrated File System with the Client Access SPIs. This is particularly important for taking advantage of the Secure Sockets Layer (SSL): the SPIs support SSL while OS/400 NetServer does not (Soltis, 2001).

IFS objects can be accessed using either OpsNav or CL commands, the command language for OS/400 that was carried forward from the S/38. Moreover, all the APIs used in the IFS are thread safe when they are directed at an object in a thread-safe file system; QOpenSys, QSYS.LIB, QOPT, QNTC, and user-defined file systems are all thread safe. Also, DataLinks extend the types of data stored in a database file. DataLink columns in the database hold references to non-database files that are stored in the local IFS, a remote IFS, or the file system of any attached UNIX or Windows server that has IBM's DataLink Manager installed. A DataLink specifies the object's file location rather than storing the data object itself in the database column. Furthermore, DataLink support lets the user assign directories in the Root to hold data link objects. Once a directory is set as a DataLink directory, all objects in that directory are accessed via the DataLink File Manager (DLFM). The IFS is therefore a fundamental application enabler for the iSeries: it enhances the existing data management capabilities of OS/400 and extends these capabilities to support emerging new application environments (Soltis, 2001).

The New I/O

The I/O subsystem reaches into almost every part of a computer system. Though it is considered only one-third of the processor-memory-I/O complex, I/O is considerably larger than either of the other two: an I/O configuration requires more hardware and more lines of operating system code than anything else in the system. Briefly, the I/O subsystem consists of the group of components, both hardware and software, responsible for processing input and delivering output to the various types of devices attached to the system. Whenever a user requires a system resource that has not already been brought into memory, such as reading from or writing to a file, requesting that a program's instructions be executed, referencing another system object, or creating or destroying an object, the computer goes through the I/O subsystem to retrieve or store, create or destroy that resource. Although I/O affects every level of a computer system, it is usually relegated to second-class citizenship (Soltis, 2001). Along with the memory subsystem, the I/O subsystem determines the response time and throughput of most computers. As the time approaches when computers from low-end PCs to the fastest supercomputers will use the same microprocessor building blocks, I/O capabilities may be the only feature that distinguishes one computer system from another. Therefore, it can be argued, as many are now starting to do, that I/O is the most important component in the system. The I/O subsystem in the iSeries has undergone major transitions in the past few years, so it is appropriate to examine those changes to see how they will affect IBM's future system designs. The iSeries uses a hierarchy of components for the attachment of I/O, which includes I/O buses, I/O hubs, I/O bridges, I/O processors (IOPs), I/O adapters (IOAs), and lastly, I/O devices and network connections. The history of I/O subsystem designs in the AS/400 and S/38 is partly technical and partly political: for both systems, the I/O designs were heavily influenced by factors outside the Rochester development lab (Soltis, 2001).

Pre-history: The S/38

The S/38 employed an I/O channel that was similar in many ways to an IBM System/370 (S/370) channel. A channel can be thought of as a specialized computer built alongside the main processor that is designed to offload I/O processing from the main processor. A channel has its own instruction set designed especially to interact with bus-attached I/O adapters and to transfer data between memory and I/O devices. The main processor passes channel programs to the channel hardware, which then executes them concurrently with the other programs executing in the main processor. When a channel finishes with its program, it interrupts the main processor to get more work. The main reason the S/38 used a channel was the people involved. Many new people had to be brought into Rochester to design and build the S/38 (Soltis, 2001). Most of them came from other parts of IBM, and since the designers of the I/O hardware had previously worked on S/370 channels, they decided to use a variant of a design they were familiar with. However, unlike the S/370, which supported multiple channels, the S/38 had only a single channel; a second channel was designed but never implemented. The S/38 channel could be described as a block-multiplexer channel operating in a fixed burst mode (Soltis, 2001). For the very first AS/400s, a new I/O structure was developed. Unlike a channel, where the I/O intelligence is placed alongside the main processor or is even part of it, the new structure distributed the intelligence throughout a number of specialized IOPs designed to perform I/O operations. The concept of using separate processors for I/O had earlier been adopted by the System/36 (S/36), so the concepts were well known to Rochester developers. A fundamental part of this new I/O structure was the System Products Division (SPD) bus. During the 1980s, IBM corporate management was concerned that IBM had too many midrange computers all competing for the same customer market. To solve this problem, management moved the responsibility for five distinct midrange systems into one development division, the SPD. Two of those systems, the S/34 and the S/38, were products from Fortress Rochester. Soon, a new project with the code name Fort Knox was started in SPD to converge all five systems into one; the various parts of the new system would be built in different locations that were part of SPD (Soltis, 2001).

From 1988 through 1997, the system I/O bus used in all AS/400s was the SPD bus. Specialized hardware generated the signals for one or more SPD buses and handled the transitions from the high-speed system memory bus to these SPD system I/O buses. The SPD bus itself underwent various updates over the years, but it is essentially a packet-oriented bus with enhancements that allow efficient streaming of data. A packet-oriented bus transmits messages in small blocks, known as packets. Streaming means that continuous blocks of data can be sent, which permits higher data rates to be achieved on the bus. In streaming mode, the SPD bus can support data rates of 25 to 36 megabytes per second (MB/sec), depending on direction and contention on the bus. Furthermore, the SPD bus supported a parallel copper implementation that could be used inside a single enclosure. This parallel copper bus had 32 lines to transfer data, 8 lines to transfer command and status information, 8 lines for origin-destination identification, and several other control lines. When the AS/400 was introduced in 1988, the parallel copper SPD bus was the only one available. Afterwards, a serial optical version of the SPD bus was introduced, which could go outside a single enclosure; this serial optical bus could cover distances of up to 500 meters, and even two kilometers in certain environments (Soltis, 2001). Connected to the SPD bus were the IOAs and IOPs that worked together to perform several functions. They moved data between the system's main memory and the I/O devices and network connections, translated data from one form to another, and provided a buffer for high-speed devices such as disks for data required by the processors. Earlier implementations of these I/O functions used dedicated-function IOPs; the exception was the multifunction I/O processor, where multiple functions could be executed on one IOP. Since most IOPs were dedicated to a single function, the various adapter cards that contained the IOPs and IOAs shared very little. As a result, numerous different processors were used for the IOPs, mostly from the Motorola 68000 family, and many AS/400-specific interfaces to devices were created for attachments such as SCSI, SCSI-2, Workstation, LAN, FDDI, ATM, WAN, Token-Ring, ISDN, and Frame Relay. In short, standardization was not a high priority (Soltis, 2001). Rather than totally altering and immediately phasing out the SPD bus and all SPD I/O adapters, the changes were staged over four releases. This approach allowed customers to migrate to the new AS/400e models and move many of their SPD I/O adapter cards to the new system. Furthermore, a transition I/O structure was created to support both SPD and PCI adapter cards for the next few releases. Soon the need arose to increase bus bandwidths beyond the SPD bus's capability, since customers demanded tremendous increases in processor, I/O device, and interconnection network speeds, and the system had to prepare for the likes of 1 to 2 gigabit/sec Fibre Channel buses and other connections in the same speed range. The result of this effort is the RIO bus, which consists of a pair of byte-wide unidirectional point-to-point buses, each containing 8 data lines, one clock line, and one flag line. Another major change during this transition period focused on the IOPs: starting with V4R1, all new IOPs were designed to have a more uniform structure.
The IOPs themselves use PowerPC chips, specifically the 32-bit 603 and 740 families. Unlike most of the previous IOPs, these support the attachment of multiple IOAs (Soltis, 2001). Before V4R1, the internal buses of the IOPs were based on the Intel 960 bus, the Motorola 68000 bus, the IBM Micro Channel, or a proprietary magnetic-media bus architecture. Since V4R1, all new IOPs use only the PCI bus. The PCI bus architecture allows a shared bus to be used for interconnecting the I/O components in the server: when one device is using the bus at a given time, no other device can use that bus. Though multiple PCI buses may be used in a server, the shared-bus architecture restricts the total I/O bandwidth that can be achieved. With InfiniBand, the shared bus is replaced by high-bandwidth, high-speed, point-to-point connections between devices. InfiniBand is known as a switched-fabric architecture because it uses switches to make point-to-point connections, similar to the way switches are used in the iSeries memory subsystem. The InfiniBand architecture is also called a channel architecture because of its similarity in concept to the S/370 channel that inspired the I/O structure in the S/38. Also, because InfiniBand is a channel, it has more intelligence built into it than a PCI bus does.

iSeries and the Internet

IBM has positioned the iSeries as a strong hardware platform for Internet-based applications by offering software development and management tools like WebSphere, as well as by creating a multifaceted iSeries environment called the Integrated File System (IFS). Server software is typically written to run in a UNIX or Windows operating system environment; in other words, servers expect an environment of directories, subdirectories, and stream files, not libraries, objects, and members. In reality, everything in the iSeries native environment is classified as a library, object, or member (Janson, 2007). There are, however, many alternative system environments on the iSeries: the iSeries supports many such universes, of which the native environment is only one. The IFS encompasses all storage on an iSeries and is composed of several directories, of which the Root directory (/) is the parent or primary directory. Everything stored on the iSeries is placed in the Root directory. Directories are similar to libraries in that they are used to organize files; unlike libraries, however, directories can have subdirectories. The Root already contains a series of predefined subdirectories that support other common computer system environments. For example, the QOpenSys subdirectory behaves like a UNIX system environment, and the QSYS subdirectory encompasses the iSeries native environment. Each of the separate environments has its own rules regarding storing information and executing programs (Janson, 2007). Data or a program stored in the Root or QOpenSys is stored as a stream file, and there are variations among these predefined directories; for example, Root is not case sensitive while QOpenSys is. In addition, the IFS supports a current directory, which plays the role of the current library in the native environment. Whenever a user does not specify a directory, the system uses the current directory, which may be set to the Root or to a directory created by the user.

Working in the IFS: Studio's Remote System Explorer (RSE) provides a tree-diagram interface to all the directories and files of the IFS. The iSeries native environment supplies IFS commands that allow users to work with any object regardless of which directory it is stored in. These commands can be entered from the command line on any menu or screen, just like CL commands, and prompting also works for them. The workhorse of these commands is WRKLNK (Work with Object Links). WRKLNK displays a list of object names, i.e. links, contained within a specified subdirectory. When issuing the WRKLNK command, the user is required to identify the path to be displayed. For example, to display all the objects within YOURLIBXX, the full IFS path through the Root, QSYS, and YOURLIBXX should be specified. The syntax for the WRKLNK command would be: WRKLNK OBJ('/QSYS.LIB/YOURLIBXX.LIB/*') (Janson, 2007)

In the above command, QSYS and YOURLIBXX need the object type LIB, and the file-naming convention of separating the object name and type with a period is required. Additionally, the path is enclosed in single quotes, and the Root is indicated by the first forward slash. A forward slash is also used to separate each subdirectory from its parent directory, and an asterisk must be specified; the asterisk means that all the objects within YOURLIBXX are displayed. Subsetted lists may be displayed by using asterisks to replace the object links' names, types, or portions of the names and types. For example, specifying *.file would display only file objects, while specifying /A* would display only the object links in the Root directory whose names begin with A. The Work with Object Links display screen functions exactly like the Work with Objects Using PDM screen: functions are carried out by specifying options to the left of the object links or by pressing a function key (Janson, 2007). Server software on an iSeries is usually installed in the Root or QOpenSys directory. Each server requires Web pages to be stored in a certain directory. In this case, the server requires all Web pages to be stored in the following directory path:

  • /QIBM/ProdData/HTPP/Public/HTTPSVR/HTML/
  • The IP address of the iSeries followed by that path and the file name can be, for instance, given as: 123.456.789.3/QIBM/ProdData/HTPP/Public/HTTPSVR/HTML/coolpage.html

As far as typing mistakes made by the user when entering the address are concerned, fortunately, nicknames for these paths can be established. These nicknames are configured with the WRKHTTPCG command (Work with HTTP Configuration). Issuing this command displays a screen with all sorts of configuration information, which contains both required settings and preferences. When the HTTP server starts, the configuration information is read and the HTTP server performs all the necessary set-up tasks defined there (Janson, 2007). In the above example, the server needs a path nickname to be set up. The Pass command assigns a nickname to an existing directory path, and when added to the configuration information, the following Pass command will establish /sample as a nickname for the server's Web page path: Pass /sample/* /QIBM/ProdData/HTPP/Public/HTTPSVR/HTML/*

Many commands may be too long to fit on the Work with HTTP Configuration screen. To display and edit a line, a 2 is typed in the option area to the left of the line and ENTER is pressed; the Change HTTP Configuration Entry screen is then displayed with the full command. A Pass command can also create a nickname of / for the Web page stored in the Welcome.html file (Janson, 2007). The Welcome.html file comes with the HTTP server software and contains a sample Web page. That Pass command means the Welcome.html file will be displayed when only the URL or IP address is specified; in short, it establishes Welcome.html as the default home page for the server. Furthermore, a Pass command can be added to make /sample a nickname for the default Web page path. To add a line, a 3 is typed in the option field on the blank line on the Work with HTTP Configuration screen, and a sequence number is entered where the new line is to be added in the configuration data. Restarting the HTTP server will then establish the user-friendly path name of /sample for the path /QIBM/ProdData/HTPP/Public/HTTPSVR/HTML

There are several ways to move a Web page to the correct directory on the iSeries. For example, SEU could be used to enter the HTML source code into a member, and then the CPYTOSTMF command could be used to move the page to the correct IFS file. However, Studio is much easier. Publishing a page is called exporting in Studio. In this example, StudioPage.html is to be exported to the iSeries directory path /QIBM/ProdData/HTPP/Public/HTTPSVR/HTML. To do this, first the file StudioPage.html is selected in the Navigator tree. FILE and then EXPORT are clicked to display the Export window. At the Export window, the Remote File System option is selected and the Next button is clicked. At the Export RFS window, the directory path may be typed in the Folder textbox. The path can also be specified by using the Browse for Folder window, which provides a navigation tree for all external systems identified by WebSphere. In this case, the user expands the iSeries entry and drills down the path /QIBM/ProdData/HTPP/Public/HTTPSVR/HTML (Janson, 2007).

Types of files in the IBM System I

Keyed Files: Storage devices physically access the storage media either sequentially or directly. Direct access means that the storage device can go directly to any storage location and read the information stored there. Sequential access means that the storage device must pass through every storage location that physically precedes the location being sought (Janson, 2007). Tape devices provide sequential access, while computer disks provide direct access to data records. When accessing a particular data record, however, there needs to be a way to identify that record. Each record is allocated a number based on its relative position within the file, called the relative record number; this is the number DFU has used to identify records. Relative record numbers work well for something like a CD with a small number of songs, but with data files containing thousands of records, it is difficult to keep track of each record's number. The most common method of identifying records is with a primary or unique key. Originally, the primary key controlled the physical location of data records, and there could be many unique keys but only one primary key. The iSeries allows a single primary key to be defined for a physical file; however, the primary key does not control the location of data records. Files may also have secondary keys. These fields do not necessarily have a unique value for each record. For example, a student file may have the field eye color as a secondary key; searching the file for a particular eye color will not return a single record but multiple records. Thus, non-unique secondary keys cannot be used to identify a particular record (Janson, 2007). Sequential files store records one after another in key order. Access to a sequential file is limited to the sequential access method, i.e. each record is processed in the order in which it is placed in the file, so such files support access in only one key field order; this is the main reason why pure sequential organization is rarely used. Keyed sequential files allow each record to be uniquely identified, but retrieving a specific record still means reading each record and checking the key field's value. In an indexed sequential file, each record contains a unique value for the key, records are grouped into blocks, and each block contains records for a specific range of key values. Within a block, the records are stored sequentially in key order; the blocks themselves do not have to be in key order. The key to this organization is the index, a file that contains an index record for each block of data records. An index record holds two major pieces of information: the highest key value contained within the block, and the starting storage address of the block. The index records are stored in key order (Janson, 2007), and the index contains only the key values and the storage locations. When accessing an indexed sequential file for a particular record, the first step is to search the index sequentially, checking each index record's key value against the key value being searched for. When an index key value greater than or equal to the search value is found, the block that would contain the data record has been located.
The system then goes directly to that block, as indicated by the storage location in the index record, and reads each data record in the block sequentially, looking for the exact key value. Space utilization in storage can be improved by using a more complicated calculation called a hashing routine. There are several types of hashing routines, but what characterizes them all is that they generate a smaller range of addresses. A typical example is the division/remainder method, in which the number of storage locations that will be in the file is determined first, and then the largest prime number less than the number of records to be stored is chosen. The method rests on the fact that the number of possible remainders generated by a division is equal to the divisor (Janson, 2007). Each operating system or file management system implements file organizations and access methods in its own way and generally has its own naming conventions. For instance, the iSeries supports access to files in arrival sequence order, i.e. a user or program can access records in the order in which they arrived in the file. As far as access is concerned, arrival sequence is similar to sequential access; however, the iSeries does not physically store records in arrival order. It employs sophisticated algorithms that calculate the most efficient use of available space and resources and may even scatter records across physical storage devices. The iSeries provides arrival sequence access through an access path, which can be thought of as an index. For example, an arrival sequence access path that supports only sequential access to a file could simply consist of a list of storage locations; to access the file, the access path would be searched sequentially and the records read directly (Janson, 2007).
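As a minimal illustration of the two access techniques just described, the following Python sketch works on in-memory stand-ins for data that would actually reside on disk; the record keys, block contents, and the prime divisor are all hypothetical.

    # Indexed sequential access: search the index for the first block whose
    # highest key is greater than or equal to the search key, then scan that
    # block sequentially for the exact key.
    blocks = [
        [(101, "Adams"), (105, "Baker"), (110, "Chen")],   # keys up to 110
        [(121, "Diaz"), (130, "Evans"), (142, "Flynn")],   # keys up to 142
        [(150, "Gupta"), (163, "Hill"), (171, "Ikeda")],   # keys up to 171
    ]
    index = [(110, 0), (142, 1), (171, 2)]  # (highest key in block, block location)

    def find_record(search_key):
        for high_key, location in index:
            if high_key >= search_key:
                for key, data in blocks[location]:
                    if key == search_key:
                        return data
                return None          # block found, but key not present
        return None                  # key beyond the highest indexed value

    # Division/remainder hashing: pick a prime close to (and below) the planned
    # file size; the remainder of key / prime becomes the storage address.
    PRIME = 997                      # hypothetical prime for a file of about 1,000 slots

    def hash_address(key):
        return key % PRIME           # possible remainders: 0 .. PRIME - 1

    print(find_record(130))          # -> "Evans"
    print(hash_address(54321))       # -> 483, an address in the range 0..996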

Logical Files: A database management system will usually provide three ways of viewing a database: a physical view, a global view, and the ability to build multiple user views of the data. Because of the iSeries' single-level approach to storage, the physical view of data is largely hidden from the user: one need not know the disk, track, or sector on which data is stored, and any specific location addresses or indices are used internally by the DBMS and are not readily available to the user. The iSeries does, however, offer a limited physical view of the data with the DSPPFM command, along with some capability to manipulate the physical organization of the data with the RGZPFM command. DDS provides a global view of all the data as files, and it also allows the construction of individual views. This is achieved through both physical and logical files. Physical files contain the definitions of the individual fields within the file and include an access path to the data (a key) as well as the data itself (Janson, 2007). Taken as a whole, these definitions provide a global view of all data on the iSeries. Users often want to see data in a form other than how it is physically organized, so the iSeries employs another type of object, the logical file, that provides alternative access to data in physical files. Like physical files, logical files contain file definitions. However, logical files contain only field definitions and an access path; no data is stored in a logical file. Since logical files do not contain any data, they must reference data already defined in a physical file, and a logical file can reference fields from many physical files.

Through logical files, unique combinations of data appear to exist without any duplication of data (Janson, 2007). Logical files are defined with DDS and, like physical files, are created by compiling the member of the source physical file that contains the logical file's DDS definition. To create a logical file, first a member with a type of LF is created, then the logical file's DDS source definition is entered, and finally the DDS is compiled to create the logical file. Since logical files do not contain any data, they simply point to data contained in physical files (Janson, 2007). Logical files offer several benefits. First, they can be used to easily generate custom reports, which relieves the programmer of having to write and maintain a large reporting system; the specialized access that logical files provide to physical files also simplifies program input and output logic, and the number of report programs can be greatly reduced. Because logical files are defined externally, they can be used by many programs; without logical files, every program would have to contain duplicate code to access the separate physical files. Lastly, logical files do not create any duplicate data (Janson, 2007).
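As a rough, language-neutral illustration (not OS/400 DDS), the Python sketch below shows the idea of a logical file as a view that exposes selected fields of physical-file records without copying any data; the record layout and field names are hypothetical.

    # Hypothetical "physical file" records.
    physical_students = [
        {"id": 1, "name": "Lee",   "eye_color": "brown", "gpa": 3.4},
        {"id": 2, "name": "Ortiz", "eye_color": "green", "gpa": 3.9},
    ]

    def logical_view(records, fields):
        # Yields trimmed records on demand; nothing is copied into a new file,
        # mirroring the way a logical file holds definitions but no data.
        for record in records:
            yield {field: record[field] for field in fields}

    for row in logical_view(physical_students, ["name", "gpa"]):
        print(row)    # e.g. {'name': 'Lee', 'gpa': 3.4}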

Reference List

Janson, R. (2007). Introduction to the iSeries and WebSphere Studio Client. USA: Shroff/Janson Publishers.

Soltis, F. (2001). The Inside Story of the IBM iSeries. Colorado, USA: 29th Street Press.


Discovering Statistics Using IBM SPSS Statistics

Figure 1 below shows that the annual average rainfall and runoff fluctuate from one year to another with a similar variation trend.

Figure 1. The Trend of Annual Average Rainfall and Runoff from 1996-2005.

The scatter plot shows that the annual average rainfall and runoff have a positive relationship (Figure 2). The regression line indicates that the annual average rainfall accounts for 90.36% of the variation in the annual average runoff. Field (2017) explains that the coefficient of determination (R-squared) indicates the degree to which a predictor influences a response variable.
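As a minimal sketch of how such a coefficient of determination is obtained, the Python fragment below fits a regression line and squares the correlation coefficient; the rainfall and runoff values are hypothetical and are not the data behind Figures 1 to 5.

    from scipy.stats import linregress

    rainfall = [802, 765, 910, 688, 845, 793, 920, 701, 860, 815]  # hypothetical, mm/year
    runoff   = [310, 290, 365, 255, 330, 305, 372, 262, 338, 320]  # hypothetical, mm/year

    result = linregress(rainfall, runoff)
    r_squared = result.rvalue ** 2        # coefficient of determination

    print(f"R-squared = {r_squared:.4f}")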

Figure 2. Scatter Plot Showing Relationships Between Annual Average Runoff and Rainfall.

Figure 3 depicts the seasonality in monthly averages of rainfall and runoff. November, December, and February have the lowest rainfall and runoff, while June, July, August, September, and October have the peak levels of rainfall and runoff. Comparatively, March, April, and May have moderate levels of rainfall and runoff.

Figure 3. Monthly Variation in Average Rainfall and Runoff Showing Seasonality.

Figure 4 reveals that monthly average rainfall and runoff have a positive relationship. The monthly average rainfall explains 97.23% of the variation in the monthly average runoff.

Figure 4. Scatter Plot Showing Relationship Between the Monthly Average Runoff and Rainfall.

Rainfall and runoff have a positive correlation, as shown in the scatter plot below (Figure 5). Rainfall accounts for 93.68% of the variation in the runoff.

Figure 5. Scatter Plot Showing the Relationship Between Rainfall and Runoff.

Reference

Field, A. P. (2017). Discovering statistics using IBM SPSS statistics. SAGE Publications.


Financial Analysis of Intel and IBM

Profitability ratios and working capital management

A company should earn profits to survive and grow over a long period of time. Profits are the ultimate output of a firm, and it will have no future if it fails to make sufficient profits. Besides management, creditors and investors are also interested in the profitability of the company. Several profitability ratios will be discussed in relation to the operational efficiency of Intel Corporation and IBM (International Business Machines).

                          2009              2008              2007
                       Intel    IBM      Intel    IBM      Intel    IBM
Gross profit margin (%) 55.69   45.72    55.46    44.06    51.92    42.24
Net profit margin (%)   12.44   14.02    14.08    11.90    18.20    10.55
Operating ratio (%)     83.74   81.06    76.18    83.87    78.57    85.33

The first profitability ratio in relation to sales is the gross profit margin (or simply the gross margin), which is calculated by dividing gross profit by sales. The gross margin reflects the efficiency with which management produces each unit of product, and it also indicates the average spread between the cost of goods sold and sales revenues. As seen from the table above, Intel Corp. has a higher gross margin than IBM. A gross margin higher than the industry average, or in this case Intel's higher gross margin in comparison with IBM's, could imply that Intel is able to produce at a relatively low cost. The rising gross margins of both companies suggest that sales prices and production costs have moved in ways that widen the margin.

  • Gross profit margin = gross profit / sales
  • Net profit margin = net earnings after taxes / sales
  • Operating ratio = (cost of goods sold + operating expenses) / sales
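A minimal computational sketch of the three ratios just listed, using hypothetical income-statement figures rather than either company's reported numbers, is:

    # Hypothetical income-statement figures (in millions).
    sales              = 35_000
    cost_of_goods_sold = 15_500
    operating_expenses = 13_800
    net_earnings       = 4_400   # after taxes

    gross_profit_margin = (sales - cost_of_goods_sold) / sales
    net_profit_margin   = net_earnings / sales
    operating_ratio     = (cost_of_goods_sold + operating_expenses) / sales

    print(f"gross margin    = {gross_profit_margin:.1%}")
    print(f"net margin      = {net_profit_margin:.1%}")
    print(f"operating ratio = {operating_ratio:.1%}")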

The net profit margin ratio is computed by dividing net earnings after taxes by sales. The ratio indicates management's efficiency in manufacturing, administering, and selling products, and it measures a company's ability to turn each dollar of sales into net earnings. Both Intel and IBM have reasonably healthy net profit margins, meaning that the firms should be able to achieve satisfactory returns on owners' equity. The high net profit margins will protect the firms in a harsh economic environment, and they indicate high earning power for the two firms.

A more detailed analysis reveals that Intel's net profit margin has been declining, implying that operating expenses relative to sales have been increasing. IBM, the firm with the higher net margin ratio, has a better capacity to withstand adverse economic conditions characterized by falling sales prices, rising costs of production, or declining demand in the market. Similarly, IBM can make better use of favorable economic conditions such as rising sales prices, falling costs of production, or increasing demand for its products. Therefore, IBM should be able to grow its profits at a higher rate than Intel due to the former's higher profit margin.

The profitability of a firm can also be measured in relation to investment. Sottini (96) notes that the return on assets (ROA) is determined by dividing net profit after taxes by the company's total assets. In that form the ratio is conceptually unsound, because it excludes interest charges from the net profit figure. The total assets have been financed by a pool of funds supplied by both creditors and shareholders, and to know how well that pool of funds has been used, the return should be compared with the cost of those funds. Since net profit after taxes in the numerator excludes interest charges, which understates the earnings generated by the pool of funds, the interest charges should be added back to net profit after taxes.

Return on assets = (net profit after taxes + interest) / total assets

Intel (2009): (4.369m + 163m) / 53.095m = 0.08535, or 8.535%

IBM (2009): (13.425m + 402m) / 109.022m = 0.1268, or 12.68%

From the above, IBM has a higher return on assets (ROA), which means that the company's assets bring in more profit than Intel's. The return on assets is a useful measure of the profitability of all financial resources invested in the firm's assets, and it evaluates the use of total funds without regard to the sources of those funds. From the ROA figures, IBM has invested $7.88 in assets for every $1 of profit, while Intel has invested $11.71 in assets to produce the same amount of profit.

The return on capital employed (ROCE) measures how well management has used the funds supplied by creditors and owners. The higher the ratio, the more efficiently the firm is using the funds entrusted to it. IBM has a higher ROCE than Intel, meaning that the company operates relatively more efficiently.

Return on capital employed (ROCE) = earnings before interest and taxes / (total assets - current liabilities)

Intel (2009) ROCE: 5.867m / (53.095m - 7.591m) = 0.1289, or 12.89%

IBM (2009) ROCE: 18.540m / (109.022m - 36.002m) = 0.2539, or 25.39%
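Reading the quoted figures as millions of dollars, the two return measures above can be reproduced as follows:

    # Figures as quoted above, in millions of dollars.
    intel = {"net_profit": 4_369, "interest": 163, "ebit": 5_867,
             "total_assets": 53_095, "current_liabilities": 7_591}
    ibm   = {"net_profit": 13_425, "interest": 402, "ebit": 18_540,
             "total_assets": 109_022, "current_liabilities": 36_002}

    def roa(f):
        # (net profit after taxes + interest) / total assets
        return (f["net_profit"] + f["interest"]) / f["total_assets"]

    def roce(f):
        # EBIT / (total assets - current liabilities)
        return f["ebit"] / (f["total_assets"] - f["current_liabilities"])

    for name, f in (("Intel", intel), ("IBM", ibm)):
        print(f"{name}: ROA = {roa(f):.2%}, ROCE = {roce(f):.2%}")
    # Intel: ROA = 8.54%, ROCE = 12.89%; IBM: ROA = 12.68%, ROCE = 25.39%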

The operating ratio helps explain changes in the net profit margin ratio. It is calculated by dividing the sum of the cost of goods sold and the operating expenses (marketing expenditures and general and administrative expenditures) by sales. The result indicates the percentage of sales consumed by the cost of goods sold and operating expenses, while the remaining percentage (100 percent minus the operating ratio) shows the amount left to cover interest, income taxes, and dividends, and to meet the company's need to retain earnings for expansion.

Working capital can also be used as a measure of liquidity; it is obtained by deducting current liabilities from current assets. It is often assumed that, of two firms, the one with the larger amount of working capital has the greater ability to meet its current obligations. This is not necessarily the case, since the measure of liquidity is the relationship between current assets and current liabilities rather than the difference between them. Therefore, the current ratio or the quick ratio is a better indicator of a firm's liquidity than the amount of working capital.

Chandra (63) notes that total assets turnover is measured by dividing total turnover (sales) by the total assets of the firm, indicating how efficiently the company's assets are being utilized in generating sales. The total assets turnover ratio is significant because it shows the firm's ability to generate sales from all the financial resources committed to it: the higher the ratio, the more revenue is generated per dollar of total investment in assets.

Asset turnover ratio = total turnover / total assets

For Intel, the total asset turnover ratios for 2008 and 2009 are:

2008: 37.586m / 50.472m = 0.74

2009: 35.127m / 53.095m = 0.66

For IBM, the total asset turnover ratios for the two years are given as:

2008: 58.892m / 109.524m = 0.54

2009: 55.128m / 109.022m = 0.51

From the above calculations, Intel has a greater ability to produce a large volume of sales from its assets, thereby allowing the firm to save on the maintenance costs of idle or poorly used assets. The total assets turnover should, however, be interpreted cautiously: in the denominator of the ratio, assets are stated net of depreciation, so older assets that are still in use with a low book value may create a misleading impression of high asset turnover. A more detailed review of the composition of the assets would be necessary for a more comprehensive analysis.

Liquidity analysis

Liquidity ratios determine the capability of the firm to cover its current and upcoming financial commitments. Cash flow statements help illustrate the liquidity of a firm, but liquidity ratios, by establishing a relationship between cash and other current assets and current obligations, provide a quick measure of the liquidity or solvency position of the firm. Efficient firms try to strike a proper balance in their cash positions, ensuring that they neither suffer from a lack of liquidity nor hold excessive liquidity.

Current ratio = current assets / current liabilities

Intel current ratio:

  • 2008: 19.871m / 7.818m = 2.542
  • 2009: 21.157m / 7.591m = 2.787

IBM current ratio:

  • 2008: 49.004m / 42.435m = 1.155
  • 2009: 48.935m / 36.002m = 1.359

The current ratio is calculated by dividing a firm's current assets by its current liabilities (Sottini, 26). From their financial statements, Intel's current ratios for 2009 and 2008 are 2.787 and 2.542 respectively, while IBM's current ratios are 1.359 and 1.155 for 2009 and 2008 respectively. Whereas a current ratio of 2-to-1 is considered satisfactory, a very high ratio could indicate idle assets. Since the two firms are in the same industry, an analyst could conclude that Intel is the more liquid of the two because it has a higher margin of safety, but IBM could be doing better than Intel in that it is utilizing its assets at a better rate.

Further analysis through the quick ratio could indicate the quality of the assets of both firms. The quick ratio is calculated in the same manner as the current ratio, but only quick, or liquid, assets are considered in place of total current assets: inventories, prepaid expenses, and deferred assets are deducted from the current assets figure since they cannot be easily converted into cash within a year.
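A brief sketch of that adjustment, using hypothetical balance-sheet figures since the two companies' inventory and prepaid amounts are not reproduced here, is:

    # Hypothetical balance-sheet figures (in millions).
    current_assets      = 21_000
    inventories         = 3_000
    prepaid_expenses    = 500
    deferred_assets     = 700
    current_liabilities = 7_600

    current_ratio = current_assets / current_liabilities
    quick_assets  = current_assets - inventories - prepaid_expenses - deferred_assets
    quick_ratio   = quick_assets / current_liabilities

    print(f"current ratio = {current_ratio:.2f}, quick ratio = {quick_ratio:.2f}")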

Gearing ratios and capital structure

Liquidity ratios, as discussed in the previous section, are calculated to indicate the current financial position of a company. To judge the long-term financial position of the firm, leverage or capital structure ratios are calculated (Shim and Siegel, 101). Capital structure ratios measure the funds provided by creditors and by the owners of the company. The debt-equity ratio is a measure of the relative claims of creditors and owners against a firm's assets. Brigham and Ehrhardt (39) note that the debt ratio is calculated by dividing a company's total debt by its total assets, whereby both current and noncurrent liabilities are used. The debt-equity ratio can be determined by dividing total long-term debt by shareholders' equity, where long-term debt is obtained by subtracting current liabilities from the company's total liabilities (Chandra, 89).

Debt-equity ratio = total long-term debt / shareholders' equity

Intel (2009): (2.049m + 555m + 1.003m + 193m) / 41.704m = 0.091

IBM (2009): (86.267m - 36.002m) / 22.637m = 2.22

From their 2009 financial statements, Intel has a debt-equity ratio of 0.091 to 1, while IBM has a debt-equity ratio of 2.22 to 1. This implies that for each equity dollar invested, Intel has raised 0.091 dollars in long-term debt while IBM has raised 2.22 dollars; in other words, IBM's long-term debt is 222 percent of equity. IBM's high ratio indicates that the claims of creditors are greater than those of the owners. A high ratio is unfavorable from the firm's point of view, as it introduces inflexibility into the company's operations through increasing interference and pressure from creditors. A major advantage that IBM investors may enjoy is the high earnings per share (EPS) that result from high leverage, especially when the company earns satisfactory profits. A very low ratio, as is the case with Intel, can worry shareholders because the company is not using debt to its best advantage.
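Reading the quoted amounts as millions of dollars, the two debt-equity figures can be recomputed as follows:

    # Amounts as quoted above, in millions of dollars.
    intel_long_term_debt = 2_049 + 555 + 1_003 + 193   # long-term items summed
    intel_equity         = 41_704

    ibm_long_term_debt = 86_267 - 36_002               # total liabilities less current liabilities
    ibm_equity         = 22_637

    print(f"Intel D/E = {intel_long_term_debt / intel_equity:.3f}")  # ~0.091
    print(f"IBM   D/E = {ibm_long_term_debt / ibm_equity:.2f}")      # ~2.22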

Investment ratios

Investment ratios are used to measure the profitability of shareholders' investments. One investment ratio is the earnings per share (EPS), which is measured by dividing total earnings after taxes, minus preference dividends, by the total number of outstanding shares. EPS calculations made over several years indicate whether or not the firm's earning power on a per-share basis has changed over that period (Fridson and Alvarez, 61). Both companies have provided their multi-year earnings per share, as well as diluted EPS. Diluted EPS reflects a situation whereby the outstanding shares increase as a result of certain events, such as a preference shareholder converting all of his preference shares into common stock or employees of the company exercising their stock options (Maguire, 28).

Intel Corp. has a basic EPS of $0.79 and $0.93 for 2009 and 2008 respectively, while its diluted EPS stands at $0.77 and $0.92 for the two years. IBM has a basic EPS of $10.12 and $9.02 for 2009 and 2008 respectively and diluted earnings per share of $10.01 and $8.89 for the two years respectively. Intel's EPS has declined over the period in focus, signifying a drop in profits, since the company has not split its stock or offered more shares in the market. IBM, on the other hand, has increased its EPS as a result of higher profits.

Of the two companies, IBM has the higher variance between diluted EPS and basic EPS, meaning that Intel shareholders are more likely to retain their share of the company's earnings than IBM investors. A high variance between basic earnings per share and diluted earnings per share means that there is a higher risk of dilution of the outstanding shares, which would reduce the share of profits available to each shareholder.

The price-earnings ratio, or P/E ratio, is the reciprocal of the earnings yield, or earnings yield ratio. The P/E ratio can therefore be calculated by dividing a company's market value per share by its EPS (Sharpe et al., 524). The price-earnings ratio is widely used by security analysts to evaluate the firm's performance as expected by investors; it indicates investors' judgment or expectations about the company's overall performance.

Intel P/E            2008     2009
Stock price ($)      12.90    19.40
EPS ($)              0.93     0.79
P/E ratio            13.87    24.56

IBM P/E              2008     2009
Stock price ($)      91.65    122.39
EPS ($)              9.02     10.12
P/E ratio            10.16    12.09
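The ratios in the table follow directly from dividing the year-end stock price by the corresponding EPS; a short sketch confirming the arithmetic with the figures quoted above:

```python
def price_earnings(stock_price, eps):
    """P/E ratio: market price per share over earnings per share."""
    return stock_price / eps

intel_pe = {2008: price_earnings(12.90, 0.93),    # ~13.87
            2009: price_earnings(19.40, 0.79)}    # ~24.56
ibm_pe = {2008: price_earnings(91.65, 9.02),      # ~10.16
          2009: price_earnings(122.39, 10.12)}    # ~12.09
```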

Management is also interested in this market appraisal of their company's performance and would like to find out the causes of any decline in the price-earnings ratio. The surge in P/E ratios for both companies indicates that investors are upbeat about their performance, with Intel investors particularly optimistic about the company's future. This means that Intel investors have higher expectations about the growth in the firm's earnings, since both companies operate in the same industry.

The net profits after taxes belong to a company's shareholders, although the income that shareholders actually receive is the portion of earnings distributed and paid out in the form of cash dividends. Therefore, a large class of present and potential investors, such as pension funds and insurance firms, is more interested in the dividend per share than in the earnings per share. The dividend per share (DPS) is the earnings paid to individual equity holders for each share held.

Companies will usually provide the dividends per share figure in their financial statements (Chandra, 57). IBM had a dividend per share of $1.90 in 2008 and $2.15 in 2009. Intel, on the other hand, had a lower DPS of $0.5475 and $0.56 in 2008 and 2009 respectively. The rise in dividends per share for IBM represents a 13.16 percent increase, while Intel's corresponds to a 2.28 percent increase. Investors who are more focused on dividends would therefore opt for IBM shares, as they promise higher dividends.

The dividend yield is the dividends per share divided by the market value per share and signifies the shareholder's return relative to the market value per share. The information on the market value per share is generally not available from the financial statements and has to be collected from external sources such as the stock exchange (prices used were derived from the MSN Money website). IBM has a dividend yield of 1.31 percent ($2.15/$164.82) while Intel's dividend yield is 2.58 percent ($0.56/$21.69). Although IBM pays the higher dividend per share, Intel has the better dividend yield because of its relatively cheap share price.
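The same arithmetic, restated as a minimal Python sketch with the figures quoted above (share prices taken from MSN Money, as noted):

```python
def dividend_yield(dividend_per_share, market_price_per_share):
    """Dividend yield: annual dividend per share over market price per share."""
    return dividend_per_share / market_price_per_share

ibm_yield = dividend_yield(2.15, 164.82)     # ~0.013, i.e. about 1.3%
intel_yield = dividend_yield(0.56, 21.69)    # ~0.026, i.e. about 2.6%
```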

Chandra (45) notes that the dividend cover measures the ability of a firm to pay dividends from its profits. The ratio is determined by dividing the net profit available to shareholders by the total dividends paid out by the firm in the course of the year. The dividend cover can also be measured by dividing the earnings per share by the dividends per share. The higher the dividend cover, the greater a company's ability to sustain dividends in the event that profits decline (Brigham and Ehrhardt, 66).

Intel dividend cover    2008      2009
EPS ($)                 0.93      0.79
DPS ($)                 0.5475    0.56
Dividend cover          1.70      1.41

IBM dividend cover      2008      2009
EPS ($)                 9.02      10.12
DPS ($)                 1.90      2.15
Dividend cover          4.74      4.71
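The cover figures in the table can be reproduced with the same division; a minimal sketch using the EPS and DPS values quoted above:

```python
def dividend_cover(eps, dps):
    """Dividend cover: earnings per share over dividends per share."""
    return eps / dps

intel_cover = {2008: dividend_cover(0.93, 0.5475),   # ~1.7
               2009: dividend_cover(0.79, 0.56)}     # ~1.4
ibm_cover = {2008: dividend_cover(9.02, 1.90),       # ~4.7
             2009: dividend_cover(10.12, 2.15)}      # ~4.7
```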

From the table above, it can be seen that IBM has a healthy dividend cover, implying that its profits could cover its current dividend more than four times over. Intel has a lower dividend cover, meaning that its shareholders face a higher risk of being affected by a shortfall in profits. The high dividend cover could explain why IBM can increase its dividend per share by 13.16 percent. The implication for investors is that they would be safer investing in IBM shares in order to protect their dividends. In the case of a downturn in the economy, Intel may be forced to dig into its retained earnings if it wants to continue paying dividends. Dividends may also carry a signalling effect; investors and analysts may therefore interpret the hike in IBM's dividend as a sign that the company expects better future prospects.

Problems of relying only on public financial information

Publicly traded companies are required by law to publish annual and quarterly financial information in accordance with established accounting principles. Even so, investors should be wary of relying solely on published financial statements. Although companies may comply with government regulations and release true and fair financial statements, managers may opt to disclose only the information that portrays their company in a positive way. Managers may also take advantage of other legal methods that inflate earnings, such as setting low depreciation rates for their assets in the books of account. Proceeds from the disposal of an asset could be included in revenue figures, thereby overstating revenues from operating activities.

References

Brigham, Eugene and Ehrhardt Michael. Financial management: theory and practice. New York: Cengage Learning, 2008. Print.

Chandra, Prasanna. Financial Management. 7th ed. New Delhi: Tata McGraw-Hill, 2008. Print.

Fridson, Martin and Fernando Alvarez. Financial Statement Analysis: A Practitioner's Guide. New Jersey: John Wiley and Sons, 2002. Print.

IBM (NYSE: IBM). Financial statements. NYSE. 2009. Web.

Intel Corp (NASDAQ: INTC). Financial statements. NASDAQ. 2009. Web.

Maguire, Marion. Financial Statement Analysis. New York: GRIN Verlag, 2007. Print.

MSN Money. Investing. Moneycentral. 2011. Web.

Sharpe, William F., Alexander, Gordon J. and Jeffery V. Bailey. Investments. 6th ed. New Jersey: Prentice-Hall, 2008. Print.

Shim, Jae and Siegel Joel. Financial Management. New York: Barrons Educational Series, 2008. Print.

Sottini, Maxime. It Financial Management. Zaltbommel, Netherlands: Van Haren Publishing, 2009. Print.


IBM.com Website and Human-Computer Interaction

Overview of IBM.com

A review of the website shows that visitors are presented with the results they are looking for. The strategy, intent, and purpose are very clear because the site offers both ordinary and advanced search, which helps when one would like more in-depth information from the website. For instance, IBM has incorporated cloud computing in its system, as reflected in its own description: "whether you work remotely, manage remote teams, or need one place to bring colleagues, partners, and vendors together, our offerings help you transform your business into a social business" (IBM: Why IBM SmartCloud for Social Business 2012, p. 1).

Discussion

On strategy, intent, and purpose, the primary action on the website is clear. For example, as indicated in the client's worksheet, the client has specific tasks and goals to achieve when visiting the website and using its search engine. The website does not display an icon that points the client directly to the well-written graphical representations he or she may need to find. However, the website allows the client to use keywords to find items such as software products.

This means that there is no direct way of finding specific software products (Raskin 2000, p.63). Secondly, the website allows online shopping and buying of software products. It even indicates which software products are on offer and the percentage savings the buyer would be entitled to when buying the products online. Therefore, the website addresses the client's dilemma over whether to buy the products online, request them from the library, or go and buy them in a bricks-and-mortar store, because such options are available on the site.

On design and functionality, there is innovative use of text, graphics, and web-based tools, which makes the site more of a brochure than a normal website. For example, a normal website has the required information on the home page. For the website in question, however, the home page does not contain all the information but provides a set of interactive web-based tools to help the user find the desired information (Sears & Jacko 2007, p. 49). Therefore, one might not find the required products on the home page.

The navigation of the website is clear, because the interface gives the visitor the chance to navigate the page and make necessary corrections before proceeding with the search. For informed internet users, arriving at an internal page via search engine results still lets them immediately understand where they are in the site structure and how to navigate to a top-level page. For example, on amazon.ca, once the user has typed a keyword and pressed search, the search engine gives various options related to the information the person wanted. From there, the person can use the web-based tools to navigate the page and get the right information, and can also navigate backward or move forward when looking for a specific item (Raskin 2000, p.52).

Navigation is not persistent or consistent. Notably, it changes depending on what the user wants to find. For example, when merely looking for information related to particular products, navigation is consistent. However, when performing different functions such as online buying, the navigation changes, and sometimes visitors who are relatively new to the website can become lost.

Visitors cannot easily, or at all, complete all three tasks because the tasks differ depending on what the user is looking for. This hinders persistent navigation of the website, meaning that each item shapes the nature of the navigation the user should apply. Links and buttons are clickable on the website. There are links with information related to the user's search; these links are displayed on the current page, and the person can click them to find information that may assist the search (Sears & Jacko 2007, p. 26).

On the website, audience-centric keyword phrases are used during navigation. This means that there is no particular taxonomy applied, nor are there internal labels and language choices. Notably, each search requires specific keywords that the user has to enter in order to obtain the intended results. For instance, if one is doing online research on social issues, it would be impractical to use legal, medical, or religious jargon in the search.

He or she has to stick to the social aspects, especially those related to the particular search. Also, a word choice that is not related to the primary search information would give undesirable results and could lead to confusion during the search (Sharp, Rogers & Preece 2007, p. 87). In such cases, the person cannot get useful information and might discredit the website for lacking the information sought.

On issues relating to readability, the website pages are simple to read because there is good use of headings and subheadings, bulleted lists, and bolded text that can serve as key phrases for finding further relevant information. The highlights are properly arranged in columns and rows, along with a list of categories into which the software products are classified. However, it is important to recognize and state categorically that IBM.com relies heavily on graphics.

The contents of the website are up-to-date. New content is displayed and dated. Also, some of the software products and their latest prices, discounts, and modes of payment are displayed for prospective buyers. Moreover, buyers are allowed to make online inquiries about software products and even place orders (Shneiderman & Plaisant 2010, p.38).

Looking at the home page of the website, visual cues show that the site is regularly maintained and up-to-date. For example, there are visual adverts that keep changing, which indicates that the website is maintained regularly. Furthermore, the latest price updates are shown and keep changing, a clear indication that maintenance is done on a daily basis. On IBM.com there are no pop-up visuals, but the software products are presented visually, for instance through the colors of the products. Other features, such as offers and software products of the season, among other up-to-date attributes, are also shown.

When the design of the website is examined carefully, it reflects the company's culture and professionalism. For instance, IBM.com presents the various categories of items offered online, such as software products, e-reading, lifestyle items, paper shops, gifts, and toys, among others. It also gives options for the user to search for software products, software, electronics, and other items. Besides, the items are arranged in an orderly manner and other related links are provided, which gives the website a professional look. Likewise, privacy policies, copyright and legal notices, and terms of use give the website a corporate and professional appearance (Shneiderman & Plaisant 2010, p.76).

Regarding the visuals, the user notes that the layout, colors, and typeface reflect the purpose of the site, the company culture, and visitors' expectations. This is because the tools are reasonably user-friendly; even a new user might find the website easy to use.

Notably, the visuals are portrayed to give a proper impression of the items that the client wanted to see, as listed in his worksheet. Additionally, most elements, including the search query tools, are well aligned with the user's expectations and the purpose of the text. Even though the visual presentations and contents of the website are user-friendly, some interactive components should be incorporated to give users a richer experience (Sharp, Rogers & Preece 2007, p. 28). Some of the visual contents are represented as shown below.

Visual contents.

The types of consumer experience that brands focus on when opening up and participating in the social web include the engaging, networked experience and the on-demand experience of users (Grudin 2012, p.57). The networked experience is about self-expression, ego gratification, portability, community, and meaningful change. On the website, this includes things like ratings and reviews, crowd-sourcing, and consumer-generated content, which are expressed very well on IBM.com. IBM.com also creates the networked experience through blog posts, where readers can express their opinions, in line with the client's interest in seeing ratings and comments about the products.

The on-demand experience is about efficiency, ease, control, accessibility, and instantaneousness. In essence, this includes things like on-site search, store locator features and RSS feeds. As given on the website, they have a well-developed search engine that can find a lot of information related to the keyword(s) inserted.

Considering the client's worksheet, this might satisfy his or her expectations on matters such as searching for software products using keywords, getting the prices of different software products in order to compare them, and making decisions based on the findings. It might also help the client obtain the software products quickly, know whether the products are available in soft or hard copy, and compare other attributes of the different items offered on the website.

Another important aspect of the OPEN brand metric system is the user's experience with acknowledgment, dialogue, customization, privilege, and popularity. Considering the website, IBM.com has helpful features, which include the Contact Us page, individualized recommendations, and surveys. This seems to answer the client's concerns in areas such as readers' recommendations on different software products, the most popular ones, and the visual representation shown on the website (Dix, Finlay, Abowd & Beale 2003, p. 27). IBM also has user-friendly features that enhance the client's experience.

Recommendation

To improve the IBM website content, it is important to incorporate more specific keywords to facilitate faster retrieval of the required information, which will save the time spent searching for content. Even though the company has an online support system, it should also incorporate modern features, such as Skype, that are more user-friendly.

Conclusion

In summary, the IBM website has various important features that are necessary for user/client interaction, for instance the OPEN brand metric system components that make it possible for users to engage in an interactive dialogue. This has been made possible through the Contact Us page, where communication between the user and the company can take place. The company also has online support system features, which facilitate interactive communication.

References

Dix, A, Finlay, J, Abowd, G & Beale, R 2003, Human-Computer Interaction, Prentice-Hall, New York.

IBM: Why IBM SmartCloud for Social Business 2012. Web.

Grudin, J 2012, A Moving Target: The Evolution of Human-Computer Interaction, Taylor & Francis, New York.

Raskin, J 2000, The Humane Interface: New directions for designing interactive systems, Addison-Wesley, Boston.

Sears, A & Jacko, J 2007, Human-Computer Interaction Handbook, CRC Press, Boston.

Sharp, H, Rogers, Y & Preece, J 2007, Interaction Design: Beyond Human-Computer Interaction, John Wiley & Sons Ltd, Boston.

Shneiderman, B & Plaisant, C 2010, Designing the User Interface: Strategies for Effective Human-Computer Interaction, Pearson Addison-Wesley, New York.


IBM Website and Human-Computer Interaction

Introduction

Globally, IBM is among the leading firms in technology. It operates in more than 170 countries and deals with the invention, development, and integration of computer software and hardware provisions. Additionally, its services seek to improve the efficiency of various organizations for enhanced competitiveness and growth. IBMs products have transformed many organizations in Australia for nearly 80 years.

In the Australian market, it promotes digital connectivity, the development of sustainable cities, and the use of novel resources. It also invests in the Australian community and offers diverse employment opportunities to many Australians (IBM 2012). This paper provides an overview and description of IBMs website. Concurrently, it considers features of the website, discerns both positive and negative aspects of the website, captures opinions of other parties regarding the usability of the website, and provides appropriate recommendations relevant in this context. It gives an in-depth evaluation of IBMs website to determine its suitability for public use.

Additionally, it examines the features of the website in relation to human-computer interaction. IBM uses its website to communicate its activities to the general public. Due to the diverse nature of its activities, it is desirable for the website to exhibit all of these activities and also showcase its technological advancements (Lopuck 2012, p. 77).

Main Features

The main features of the website include the provision of keywords (for navigation) and appropriate links to other areas. This enables clients to access important information with ease. The words that have been made bold and clear on the homepage include IBM, Solutions, Services, Product & Support, and Download. These make it considerably easier for IBM and its website users to interact. The other key feature is the use of large pictures on the website, which has the effect of creating curiosity. Pictures and appropriate color codes can attract the attention of users. Appealing visualizations are also informative and easy to understand (Card, Mackinlay & Shneiderman 1999, p.562).

Another vital feature is the presence and use of links at the bottom of the website. These links allow viewers to access more information regarding IBM. They link users to information about the firm, business executives, key news, and how clients may shop. These provisions are user friendly. Additionally, the website contains slide shows containing different information. Finally, the website has incorporated links to social sites such as Facebook and Twitter.

This enables clients to connect with the firm and access its products and services online (Hansen, Schneiderman & Smith 2010, p.28). Precisely, the use of appropriate colors, links, texts, and animations has revitalized IBMs website and user interface.

Positive and Negative Aspects of the Website

Positively, the ability of IBM to display vital information through a well-orchestrated website is crucial. The firm has numerous products and services. Besides, it is involved in many other activities, a fact that renders its website viable. Its user interface is comprehensive and viably created. The use of bold and clear texts also eases the usage of the website. It makes it easy for users to access IBMs products and services online.

Additionally, users can seek assistance from IBM representatives. The websites linkage to social sites increases the interaction between IBM and its clients. This is mutually beneficial (Lopuck 2012, p. 77). The links to social sites increase the public knowledge of IBM and increase traffic to IBMs website. Similarly, social sites increase the favorable perception of the firm and its products, enable easy marketing, aid monitoring of conversations about IBM, and improve insights in the targeted markets (Hansen, Schneiderman & Smith 2010, p.28). Additionally, incorporated slide shows make the usage of the website easy as slides are faster and comprehensible. The mentioned slides make it easy for users to navigate the website. Consequently, they can easily find the required information.

The pages of the website's interface are similar, and actions can easily be reversed. This makes the website easier to use than others. Even though the display of various texts on the website is a positive aspect, it has the potential to confuse viewers. Displaying so many texts necessitates the use of small fonts, which is strenuous for some users of the website and is thus a negative aspect. Additionally, the multiple paths in the interface confuse some users of the site (Sears & Jacko 2009, p.193). The other negative aspect of the website is the use of dark colors, mainly black and dark blue.

The Views of Other People

It is important to know how other people perceive IBMs website in the realms of its usability and viability. The two respondents engaged (interviewed) to extract this information were my uncle who is a banker and my brother who is a student. My uncles views about the website mainly focused on the content of the webpage. He said that the website was appropriate for IBM. It contained detailed information about IBM as a company, its products and services, and other relevant content. He noted that the boldness of key texts such as the Products and Services made it easy to use the website (Smith-Atakan 2006, p. 31).

An interesting observation that he made regarding the interface is the choice of dark colors. His view was that the black and dark-blue colors were appropriate since IBM is a globalized organization. The combination of these two colors indicated the productivity and prospective prosperity of IBM. He specifically said the two have been used to influence the perceptions of individuals who visit the site.

Concurrently, the views of my brother mainly focused on the usability of the site and the linkages of the site to social sites. He felt that the use of slide shows made it easy to find information quickly. Additionally, the multiple links applied in this context were modern. He also said that the inclusion of social site links enables the firm to popularize its products and services since the sites have become more popular in recent years.

He added that the website has visualizations that are easy to understand and are highly informative (Chen 2001, p.135). The views of the two were similar to mine, with minor disparities. I held the same views as my uncle: the boldness of the key texts is important, and a large amount of information also needs to be included. I also held views similar to my brother's, especially regarding the inclusion of the links to social sites and the use of slide shows. It is crucial to agree that the various opinions regarding the structure and use of this website can reveal aspects of its usability and other considerable provisions.

Recommendations on Necessary Improvements of the Websites

Generally, IBM Australias website has good features and enables easy interaction and usage. However, it is crucial to front viable recommendations on improvements that may be made to advance the usability and interaction of its interface. The first recommendable change is the increase in the sizes of fonts used on the site. This would make it less strenuous for users who are not able to see small fonts. Secondly, an improvement that the IBM team may make on the website to improve its interaction with humans and its usability is the incorporation of audio assistance to the users (Shneiderman 1998, p. 84).

These may include directions to users on how to log in or how to shop, and so on. Lastly, I recommend that more links to social sites be added, since social behavior is a fundamental part of human activity and has many benefits for the business. The inclusion of the links will improve the interaction of humans with computers and will make the experience pleasurable. The colors used on the website should also be enhanced to augment its visual appeal. It is crucial to consider these provisions to enhance the usability of this website; its interaction with the public can determine the fate of IBM in the realm of international business.

Conclusion

It is important to consider the viability and usability of any given website. Contextually, IBM is a large organization handling numerous technological commodities. It is involved in many other activities such as research, innovation, and invention. Therefore, the company needs a website that can display as much information as possible to the public. In this context, IBMs website needs to be interactive and easy to use.

This paper took an in-depth analysis of the website of IBM in Australia with the main focus being on its interactive features and usability. One of the main features of the website is the use of bold and large fonts in the display of key texts. They enhance the usability of the concerned interface. The other feature is the use of large pictures that arouse the curiosity of users and also pass a lot of information. Additionally, the inclusion of links to social network sites such as Twitter and Facebook in IBMs website increases the human-computer interaction. Additionally, the slideshows and the list of links at the bottom of the webpage increase the usability of the website.

The website has various positive and negative aspects. These relate to its ability to interact with humans. The display of so much information on the website and the use of bold and conspicuous texts are positive aspects of the website. They make it easy for people to access considerable information. The other positive aspect of the website is the inclusion of links to social sites and the use of slide shows to display different information.

A negative aspect of the site is that the numerous texts used on it have the potential to confuse users. In addition, there are multiple paths in the interface, which makes the interface difficult for some people to use. Generally, the website has good features and enables easy interaction and usability. It is recommended that the inclusion of audio assistance and an increase in the fonts used could make the website more interactive and easier to use.

List of References

Card, S., Mackinlay, J & Shneiderman, B 1999, Readings in information visualization: using vision to think, Morgan Kaufmann, San Francisco, CA.

Chen, Q 2001, Human-computer interaction: issues and challenges, Idea Group Publishers, Hershey, PA.

Hansen, D., Schneiderman, B & Smith, M 2010, Analyzing social media networks with NodeXL insights from a connected world, Morgan Kaufmann, San Francisco, CA.

IBM 2012, IBM website. Web.

Lopuck, L 2012, Web design for dummies, John Wiley & Sons, Hoboken, NJ.

Sears, A & Jacko, J 2009, Human-computer interaction. Designing for diverse users and domains, CRC Press, Boca Raton.

Shneiderman, B 1998, Designing the user interface: strategies for effective human-computer-interaction, Addison Wesley Longman, Reading, MA.

Smith-Atakan, S 2006, Human-computer interaction, Thomson, London.


The IBM Company Management of Information System

Introduction

According to Hofstede's five-dimensional model of national and organizational culture, namely Power Distance, Uncertainty Avoidance, Individualism, Masculinity, and Long-Term Orientation (2005), a company's performance and exposure across these dimensions depend on how strongly each dimension is expressed. There is an urgent need for any company to ensure it avoids blending differentiated values, especially at the individual level.

What is an organizational culture? It is a programming of the mind, a combination of shared rules and instructions, which distinguishes the member groups of one company from those of others. It is a collective phenomenon connected to the different aspects and departments of the organization or firm, and it guides the firm through its undertakings. One acquires an organizational culture from the social setting of the firm or organization.

When a person changes jobs, the organizational culture also changes, but only to a certain extent, because most organizational structures rally behind common business standards, regulations, or policies. Organizational culture differs from other cultures because it allows employees to carry out their business undertakings through the implementation of the set regulations.

Methodology

This report on the IBM company setting adopts a methodology of critical analysis of the existing literature on performance and business transactions. Secondly, it re-examines and studies literature drawn from experts' observations on the topics, especially experts directly involved in implementation in the business world. Good analysis focuses on companies that are strongly affected by these circumstances and are still active in their market segments.

E-commerce and its issues

Today the designers of products, manufactures and, marketers are in different locations and often need to complete transactions through the net. Their information requires proper security for an effective collaboration.

Many large companies today conduct much of their daily routine through the internet. Substantial investment in information security is therefore crucial for an extended enterprise. Big companies invest more heavily in security than smaller companies, but the threats befall all of them, and arguably fall more heavily and disproportionately on smaller firms.

The security risks entail disruptions or delays, especially for companies with extended enterprises. Other risks include the theft of patented material, compromised data integrity, and worse outcomes such as total loss of information or its manipulation for profit.

Most executives do not understand the value of information security; the factors that drive firms into the investment therefore include customer requests, government regulations, and business policies or requirements. In the marketplace, some companies treat the investment in information security as a competitive advantage because customers feel comfortable transacting with such firms.

Information security/threats

What are the benefits of investing in information security? Does the investment decrease costs and risks? Insurance companies, arbitrageurs, and financial trading firms manage risk by understanding the methodologies behind the risks experienced by their client firms (Schneier, 2009).

A firm is able to quantify its value through evaluation of performance or in the course of comparison between the prospective projects and the pending work. The quantitative risk analysis shows how the control adds value in a reputable and comparable manner.

There are quite a number of benefits for a firm that invests in information security, chief among them the ability to cope effectively with today's computer threats, such as viruses and worms, web hacking, break-ins and defacement, internet disruptions, and other cyber events.

Insecurity has made the delivery of products in the virtual market complex and progressively slower, since people want to verify the viability of a transaction before engaging in it. Threats and risks pervade virtual transactions.

By investing in security control measures, a company is able to safeguard its productivity. For instance, how much time do employees spend surfing the net when they are supposed to be working on viable projects? How much productivity is lost due to downtime? The ability to monitor such events enables the business to prosper. Secondly, the company is able to control data losses by investing in robust backup solutions on the server.

These backup solutions make it easier to avoid the cost of reworking lost data. It is very difficult to quantify the damage done to data by malicious code. As a defence and countermeasure, it is important to note that having a good backup system enhances recovery (Easttom, 2006). It is equally important for an outsourcing company to ensure proper backups, lest it suffer cost-related losses (Kerber, 2007).

Today, system security protects business proceedings from various performance hitches such as data breaches and damage to reputation. Although it is difficult to predict the value of information security, it is helpful and important for determining business value, especially when considering a new project; one would compare the amount of useful information available from an existing information security system.

New technology

Today it is evident that the role of computers has a huge impact on every person. Computerisation has taken over the majority of societal and business roles and has greatly improved lives. The most advanced machines include intelligent systems that have the ability to control other, manual machines. According to Mathis et al. (2007), manufacturing firms such as IBM today use computer procedures and information fed into processing systems to produce the desired outputs.

Computer systems are all designed differently, and even experts cannot be acquainted with all of the details involved. The battle with technology is dynamic, a continuous process expected to bring new challenges every day. This mainly influences the marketing and advertising department, which must constantly anticipate change and respond with creativity.

Society is dependent on technology today, and having computers in the workplace, at home, and on the move enables the effectiveness and efficiency required for better, all-round business. Computers cut down production costs and time, since the majority of manual tasks are eliminated.

Technology has advanced to the point where graphical user interfaces and three-dimensional interfaces are in use. Unlike earlier days, when people had to produce everything manually, computing now incorporates touch, voice, and click commands. The main aim of most companies is to embrace these technological skills, since they make human tasks easier to undertake without taking over people's lives.

Arguably, it is very hard for intelligent systems to take over society, since human beings make and control them with the aim of improving quality of life. It is difficult to overthrow human intelligence and creativity. Today, inventiveness and resourcefulness, especially in business sectors through the use of the internet, enable effective communication and sharing of information.

Although computerised tasks are expensive, especially the initial start-up cost, subsequent advances and the balance of input requirements against outputs make the whole system cheaper and much easier to appreciate because of the quality and quantity of output.

Business ethics and social issues

Avoiding Uncertainties

This aspect relates to the amount of stress or strain one may be undergoing, especially in a social setting where the future holds unknown possibilities. For instance, technological changes in the company may place a great deal of strain and stress on employees, whose new requirements may entail retraining to keep up with the standards. These employees therefore face a competitive disadvantage against well-trained and up-to-date prospective employees seeking consideration for similar positions.

Uncertainty avoidance is not closely linked to business risk avoidance. It indicates a society's tolerance of ambiguity. It is the factor that determines the extent to which an employee can persevere through comfortable or uncomfortable job-related and social situations in the employment setting. Some people rely on organizational rules to minimize unstructured circumstances, such as unanticipated or unknown situations that differ from the norm.

Research indicates that people in uncertainty-avoiding conditions are usually emotional, and their motivation emerges from personal inner nervousness (Hofstede and Hofstede, 2005). Other people accept uncertainty and end up being more tolerant of opinions different from their expectations. These people are tolerant of any organizational change and are willing to go along with it regardless of their differences of opinion, thought, or contempt.

Individualism

Employees working in a large company such as IBM are bound by collectivism, or group work. At an individual level, employees have certain cultural regulations that loosen the ties between groups. The individualism dimension within a company addresses this question of bonds. Some social groups may show strong integration, probably due to their social bonds, generational links, or loyalty.

Masculinity vs. Femininity

These are issues relating to the division between gender-related values. The subject treats the distribution of values between the genders, and whether that distribution ensures parity, as a fundamental societal issue. The study of IBM indicates that feminine values vary less among societies than male values do.

Among the different countries where the IBM Company has branches, the male values are dominant in the line of assertiveness and competitiveness. On the other hand, women have almost similar values focusing on modesty and care.

According to Hofstede and Hofstede (2005), assertiveness in the model is labelled Masculinity, while modesty is labelled Femininity. The issue is, however, subject to social influences, such as competitive situations that change the feminine perspective and narrow the gap. Sometimes social cultures carry deep-rooted values, such as taboos, that may also create adversity, especially for women.

Business Strategy/initiative

Today most companies such as the IBM have realized the key role of the HR departments. To increase profitability, the HR department should be the key business strategic planner through deployment of the technology such as the web-based analysis of economic factors influencing the business.

The recognition of knowledge as the department's key resource underpins predictions that HR management will in future require new and radical management strategies and practices. Information technology should take over many of the routine administrative tasks currently assigned to the HR department.

The managers in the HR department should be strategic business associates who ensure the business gains from its planning strategies. According to Mathis et al. (2007), the HR management system should focus on maximizing profit margins through enhanced quality and technology-based human management as a way of creating value for the organization.

The aspects of internetworking required for the HR management include intranet technology, electronic education for the clients as well as the employees, self-service for the client, gathering of the clients response, reactions or comments virtually through surveys and electronic comments.

Future Human Resources plans

In line with Daft and Marcic (2008), planning for technological advancement as a business strategy implies an advancement in employees' productivity aimed at improving overall performance.

In integrating information technology into the business, the aim is ultimately to manage and improve customer relationships, manage business intelligence, plan resources, manage people (especially their knowledge), manage the supply chain, enhance electronic trade, and support decision-making procedures (Daft and Marcic, 2008).

Most businesses have attained the majority of these aspects, but the expectation for IBM is that its system will eventually support all of these requirements. Any future endeavours of IT-related departments should generate improved performance and effectiveness through cost reduction and the maximization of profit margins (Mathis et al., 57).

Today, utilizing the available technology is inevitable; the question remains whether companies are utilizing the technology in the right manner, especially in their human resource departments.

According to Daft and Marcic (2008), what lies ahead is a human resource department focused on a knowledge-based economy, where the race between rival companies is over fast learning and flexible organization, with the aim of gaining an advantage in already technologically literate markets. Technological advancement allows companies to collaborate and exchange information on contracting or stockholding.

Major future understanding of the IT integrated in the HR management entails definition of the intended and anticipated outcomes. The current increased usage of technology in the workplace shows that it is inevitable for the HR departments to adopt the web-based systems as a business initiative.

Conclusion

Today, recovering a business from a disaster is a common concern in systems management. For many years it was a long and risky procedure for the overall business unless business continuity arrangements were used to supply the information needed to carry out the disaster recovery process. Management of information systems can benefit IBM because of its economic capabilities and the availability of resources.

The new management system entails outsourcing a disaster recovery team, a procedure that is both economical and effective. IBM enjoys the advantage of standard backup procedures for storing information and other important assets. Incorporating continuity and disaster recovery plans as necessary tools for backing up management procedures allows IBM's business policies to enforce compliance.

An effective business continuity plan entails management sponsorship and compliance with policies and procedures. The process depends mainly on the firm's senior corporate management. The complexity of businesses, as highlighted by human error, calls for compliance measures that determine the critical need for the firm to overcome its vulnerability to malfunctions.

A good management system focuses on business impact and risk analysis, which contributes directly to the business's disaster recovery plan. The business management plan therefore draws support from all the firm's departments by incorporating policies into standard operating procedures in order to measure the competencies of these functional areas.

References

Daft, R .L, & Marcic, D., 2008. Understanding Management. Kentucky: KY. Cengage Learning Publishers

Easttom, C., 2006. Network Defense and counter measures principles and practices: Security Series. Pearson Prentice Hall

Hofstede, G., and Hofstede, G.J., 2005. Cultures and Organizations: Software of the Mind. (Second Edition). New York, NY: McGraw-Hill

Kerber, R., 2007. Cost of Data Breach at TJX Soars to $256m. [online]. Web.

Mathis, R, L., Jackson, J. H., and Elliott, T. L., 2007. Human Resource Management. Thomson Southwestern Publishers

Schneier, B., 2009. Security ROI: Fact and Fiction. [online]. Web.
