Sunday, July 31, 2011

Typing


Typing is the process of inputting text into a device, such as a typewriter, cell phone, computer, or calculator, by pressing keys on a keyboard. It can be distinguished from other means of input, such as the use of pointing devices like the computer mouse, and text input via speech recognition.
The world's first typist was Lillian Sholes from Wisconsin. She was the daughter of Christopher Sholes, the man who invented the first practical typewriter.
User interface features such as spell checkers, autocomplete, and autoreplace serve to facilitate and speed up typing and to prevent or correct errors the typist may make.
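
For instance, a simple autocomplete can work by matching what has been typed so far against a word list. The C sketch below illustrates the idea with an invented four-word dictionary and a made-up suggest() helper; real typing aids use far larger dictionaries and smarter ranking.

#include <stdio.h>
#include <string.h>

/* Tiny invented word list standing in for a real dictionary. */
static const char *dictionary[] = { "keyboard", "keypad", "typing", "typewriter" };

/* Return the first dictionary word that starts with the typed prefix,
 * or NULL if nothing matches. */
const char *suggest(const char *prefix) {
    for (size_t i = 0; i < sizeof dictionary / sizeof dictionary[0]; i++)
        if (strncmp(dictionary[i], prefix, strlen(prefix)) == 0)
            return dictionary[i];
    return NULL;
}

int main(void) {
    const char *s = suggest("typew");
    printf("Suggestion: %s\n", s ? s : "(none)");  /* prints "typewriter" */
    return 0;
}
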
A computer keyboard is divided into sections according to function type. The alphanumeric and typing keys function in much the same way as a typewriter. The numeric keypad works like a traditional adding machine so you can enter numbers quickly. The control keys, used either alone or in combination with other keys, perform specific actions. The function keys each perform a specific program task. The navigation keys move your cursor around a document or Web page.
Home Keys
  • Each finger rests on a particular key in the home row of the keyboard when not typing, in order to keep "grounded" and oriented at all times. The home keys are ASDF for the left hand and JKL; for the right. The thumbs remain in the air, or very gently in contact with the keys below.
  • Each finger is responsible for a vertical column of keys, which you can think of as a "home column". The column is not straight up and down, but rather slopes up to the left.
  • Both index fingers are responsible for an additional column, the one next to their home columns towards the middle of the keyboard.
  • The thumbs are used for the space bar, and depending on the shape of your keyboard can also be used for the "command" (Apple computers) or "Windows" (PCs) key.
  • The left-hand pinky is also responsible for all the keys to the left of its home column, including the left shift key, caps lock, tab, tilde, escape and others.
  • The right-hand pinky is a real workhorse, covering everything to the right of its home column. Take a look - there's a lot of stuff there! 
  • Typing is real fun!

Software

Software consists of computer instructions or data; anything that can be stored electronically is software. The storage devices and display devices themselves are hardware.
The terms software and hardware are used as both nouns and adjectives. For example, you can say: "The problem lies in the software," meaning that there is a problem with the program or data, not with the computer itself. You can also say: "It's a software problem."
The distinction between software and hardware is sometimes confusing because they are so integrally linked. Clearly, when you purchase a program, you are buying software. But to buy the software, you need to buy the disk (hardware) on which the software is recorded.
Software is often divided into two categories:

  • Systems software: Includes the operating system and all the utilities that enable the computer to function.

  • Applications software: Includes programs that do real work for users. For example, word processors, spreadsheets, and database management systems fall under the category of applications software.

    System software is computer software designed to operate the computer hardware and to provide a platform for running application software.[1][2]
    The most basic types of system software are the operating system and the utility programs that enable the computer to function.
    In some publications, the term system software is also used to designate software development tools (like a compiler, linker or debugger).[3]
    In contrast to system software, software that allows users to do things like create text documents, play games, listen to music, or surf the web is called application software.[4]


    What is Application Software?
    Application software applies the power of a computer directly to a dedicated task. Application software is able to manipulate text, numbers and graphics. It can take the form of software focused on a single task, such as word processing, spreadsheets, or playing audio and video files.

    Different Types of Application Software

    Word Processing Software: This software enables users to create and edit documents. The most popular examples of this type of software are MS Word, WordPad, Notepad and some other text editors.

    Database Software: A database is a structured collection of data. A computer database relies on database software to organize the data and enable users to perform database operations. Database software allows users to store and retrieve data from databases. Examples are Oracle, MS Access, etc.
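
    As a concrete sketch of storing and retrieving, the short C program below uses the SQLite library (compile with -lsqlite3); the table, columns, and data are invented for illustration.

    #include <stdio.h>
    #include <sqlite3.h>

    /* Print each row a query returns: SQLite calls this once per row. */
    static int print_row(void *unused, int argc, char **argv, char **col) {
        for (int i = 0; i < argc; i++)
            printf("%s = %s\n", col[i], argv[i] ? argv[i] : "NULL");
        return 0;
    }

    int main(void) {
        sqlite3 *db;
        char *err = NULL;

        sqlite3_open("example.db", &db);   /* open or create a database file */
        sqlite3_exec(db,                   /* store: create and fill a table */
            "CREATE TABLE IF NOT EXISTS people (name TEXT, age INT);"
            "INSERT INTO people VALUES ('Ada', 36);",
            NULL, NULL, &err);
        sqlite3_exec(db,                   /* retrieve: query the data back */
            "SELECT name, age FROM people;",
            print_row, NULL, &err);
        sqlite3_close(db);
        return 0;
    }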

    Spreadsheet Software: Excel, Lotus 1-2-3 and Apple Numbers are some examples of spreadsheet software. Spreadsheet software allows users to perform calculations. They simulate paper worksheets by displaying multiple cells that make up a grid.
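
    To make the grid idea concrete, the C sketch below models a sheet as a two-dimensional array of cells and totals one column, much as a SUM formula would; the dimensions and values are invented for illustration.

    #include <stdio.h>

    #define ROWS 3
    #define COLS 2

    int main(void) {
        /* A worksheet is essentially a grid of cells; here, a 3x2 block. */
        double sheet[ROWS][COLS] = {
            { 10.0, 1.5 },
            { 20.0, 2.5 },
            { 30.0, 3.0 },
        };

        /* The equivalent of placing =SUM(A1:A3) in a cell: total column A. */
        double sum = 0.0;
        for (int r = 0; r < ROWS; r++)
            sum += sheet[r][0];

        printf("SUM of column A = %g\n", sum);  /* prints 60 */
        return 0;
    }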

    Multimedia Software: Multimedia software allows users to create and play audio and video media. Audio converters, players, burners, video encoders and decoders are some forms of multimedia software. Examples of this type of software include Real Player and Media Player.

    Presentation Software: Software that is used to display information in the form of a slide show is known as presentation software. This type of software provides three functions: editing that allows insertion and formatting of text, methods for including graphics, and the execution of the slide show. Microsoft PowerPoint is the best-known example of presentation software.

    Examples of Application Software

    Enterprise Software: It addresses the needs of organizational processes and data flow. Customer relationship management and the financial processes in an organization are carried out by means of enterprise software.

    Information Worker Software: It handles individual projects within a department and individual needs for creating and managing information. Documentation tools, resource management tools and personal management systems fall under this category of application software.

    Educational Software: It can run tests and track progress, and it shares some capabilities with collaborative software. It is often used in teaching and self-learning.

    Simulation Software: Used to simulate physical or abstract systems, simulation software finds applications in both research and entertainment. Flight simulators and scientific simulators are among the best-known examples.

    Content Access Software: It is used to access content without editing. The common examples of content access software are web browsers and media players.

    Thus we see that application software has made it possible for us, as users, to interact with computer systems. Application software has served as a boon in harnessing computing power for the accomplishment of important individual and organizational tasks.


History of Computers

          Where do you think the history of computers begins?


    One of the earliest machines designed to assist people in calculations was the abacus which is still being used some 5000 years after its invention.
    In 1642 Blaise Pascal (a famous French mathematician) invented an adding machine based on mechanical gears in which numbers were represented by the cogs on the wheels.
    Englishman, Charles Babbage, invented in the 1830's a "Difference Engine" made out of brass and pewter rods and gears, and also designed a further device which he called an "Analytical Engine". His design contained the five key characteristics of modern computers:-
    1. An input device
    2. Storage for numbers waiting to be processed
    3. A processor or number calculator
    4. A unit to control the task and the sequence of its calculations
    5. An output device
    Augusta Ada Byron (later Countess of Lovelace) was an associate of Babbage who has become known as the first computer programmer.
    An American, Herman Hollerith, developed (around 1890) the first electrically driven device. It utilised punched cards and metal rods which passed through the holes to close an electrical circuit and thus cause a counter to advance. This machine was able to complete the calculation of the 1890 U.S. census in 6 weeks compared with 7 1/2 years for the 1880 census which was manually counted.
    In 1936 Howard Aiken of Harvard University convinced Thomas Watson of IBM to invest $1 million in the development of an electromechanical version of Babbage's analytical engine. The Harvard Mark 1 was completed in 1944 and was 8 feet high and 55 feet long.
    At about the same time (the late 1930's) John Atanasoff of Iowa State University and his assistant Clifford Berry built the first digital computer that worked electronically, the ABC (Atanasoff-Berry Computer). This machine was basically a small calculator.
    In 1943, as part of the British war effort, a series of vacuum tube based computers (named Colossus) were developed to crack German secret codes. The Colossus Mark 2 series (pictured) consisted of 2400 vacuum tubes.
    Colossus Mark 2 (photo in public domain - copyright expired)
    John Mauchly and J. Presper Eckert of the University of Pennsylvania developed these ideas further by proposing a huge machine consisting of 18,000 vacuum tubes. ENIAC (Electronic Numerical Integrator And Computer) was born in 1946. It was a huge machine with a huge power requirement and two major disadvantages. Maintenance was extremely difficult as the tubes broke down regularly and had to be replaced, and also there was a big problem with overheating. The most important limitation, however, was that every time a new task needed to be performed the machine needed to be rewired. In other words, programming was carried out with a soldering iron.
    In the late 1940's John von Neumann (at the time a special consultant to the ENIAC team) developed the EDVAC (Electronic Discrete Variable Automatic Computer) which pioneered the "stored program concept". This allowed programs to be read into the computer and so gave birth to the age of general-purpose computers.
    Tubes from a 1950s computer (source: http://en.wikipedia.org/wiki/File:Ibm-tube.jpg)

    The Generations of Computers

    It used to be quite popular to refer to computers as belonging to one of several "generations" of computer. These generations are:-
    The First Generation (1943-1958): This generation is often described as starting with the delivery of the first commercial computer to a business client. This happened in 1951 with the delivery of the UNIVAC to the US Bureau of the Census. This generation lasted until about the end of the 1950's (although some stayed in operation much longer than that). The main defining feature of the first generation of computers was that vacuum tubes were used as internal computer components. Vacuum tubes are generally about 5-10 centimeters in length and the large numbers of them required in computers resulted in huge and extremely expensive machines that often broke down (as tubes failed).
    The Second Generation (1959-1964): In the mid-1950's Bell Labs developed the transistor. Transistors were capable of performing many of the same tasks as vacuum tubes but were only a fraction of the size. The first transistor-based computer was produced in 1959. Transistors were not only smaller, enabling computer size to be reduced, but they were faster, more reliable and consumed less electricity.
    The other main improvement of this period was the development of computer languages. Assembler languages or symbolic languages allowed programmers to specify instructions in words (albeit very cryptic words) which were then translated into a form that the machines could understand (typically series of 0's and 1's: binary code). Higher level languages also came into being during this period. Whereas assembler languages had a one-to-one correspondence between their symbols and actual machine functions, higher level language commands often represent complex sequences of machine codes. Two higher-level languages developed during this period (Fortran and Cobol) are still in use today though in a much more developed form.
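    To see the difference in level, compare one higher-level statement with the machine-level steps it stands for. The C sketch below shows the idea; the assembler mnemonics in the comments are generic illustrations, not any particular processor's instruction set.

    #include <stdio.h>

    int main(void) {
        int price = 100, tax = 8;

        /* One higher-level statement... */
        int total = price + tax;

        /* ...stands for a short sequence of machine-level steps, roughly:
         *     LOAD  R1, price    ; fetch the first operand
         *     ADD   R1, tax      ; add the second operand
         *     STORE R1, total    ; write the result back
         * An assembler language names steps like these one-for-one;
         * the higher-level line hides the whole sequence. */

        printf("total = %d\n", total);
        return 0;
    }
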
    The Third Generation (1965-1970): In 1965 the first integrated circuit (IC) was developed, in which a complete circuit of hundreds of components could be placed on a single silicon chip 2 or 3 mm square. Computers using these ICs soon replaced transistor-based machines. Again, one of the major advantages was size, with computers becoming more powerful and at the same time much smaller and cheaper. Computers thus became accessible to a much larger audience. An added advantage of smaller size is that electrical signals have much shorter distances to travel and so the speed of computers increased.
    Another feature of this period is that computer software became much more powerful and flexible and for the first time more than one program could share the computer's resources at the same time (multi-tasking). The majority of programming languages used today are often referred to as 3GL's (3rd generation languages) even though some of them originated during the 2nd generation.
    The Fourth Generation (1971-present): The boundary between the third and fourth generations is not very clear-cut at all. Most of the developments since the mid 1960's can be seen as part of a continuum of gradual miniaturisation. In 1970 large-scale integration was achieved where the equivalent of thousands of integrated circuits were crammed onto a single silicon chip. This development again increased computer performance (especially reliability and speed) whilst reducing computer size and cost. Around this time the first complete general-purpose microprocessor became available on a single chip. In 1975 Very Large Scale Integration (VLSI) took the process one step further. Complete computer central processors could now be built into one chip. The microcomputer was born. Such chips are far more powerful than ENIAC and are only about 1cm square whilst ENIAC filled a large building.
    During this period Fourth Generation Languages (4GL's) have come into existence. Such languages are a step further removed from the computer hardware in that they use language much like natural language. Many database languages can be described as 4GL's. They are generally much easier to learn than are 3GL's.
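    The contrast can be sketched in code. In the C program below, the loop spells out how to count matching records step by step (3GL style), while the query in the comment simply states what is wanted (4GL style); the data and query are invented for illustration.

    #include <stdio.h>

    int main(void) {
        int ages[] = { 23, 41, 35, 52, 19 };
        int n = sizeof ages / sizeof ages[0];

        /* 3GL style: spell out HOW to find the answer, step by step. */
        int count = 0;
        for (int i = 0; i < n; i++)
            if (ages[i] > 30)
                count++;
        printf("people over 30: %d\n", count);

        /* 4GL style (a database query language) states only WHAT is
         * wanted, close to natural language:
         *
         *     SELECT COUNT(*) FROM people WHERE age > 30;
         *
         * The database system works out the looping by itself. */
        return 0;
    }
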
    The Fifth Generation (the future): The "fifth generation" of computers was defined by the Japanese government in 1980 when they unveiled an optimistic ten-year plan to produce the next generation of computers. This was an interesting plan for two reasons. Firstly, it was not at all clear what the fourth generation was, or even whether the third generation had finished yet. Secondly, it was an attempt to define a generation of computers before they had come into existence. The main requirements of the 5G machines were that they incorporate the features of Artificial Intelligence, Expert Systems, and Natural Language. The goal was to produce machines that are capable of performing tasks in similar ways to humans, are capable of learning, and are capable of interacting with humans in natural language and preferably using both speech input (speech recognition) and speech output (speech synthesis). Such goals are obviously of interest to linguists and speech scientists as natural language and speech processing are key components of the definition. As you may have guessed, this goal has not yet been fully realised, although significant progress has been made towards various aspects of these goals.

    Parallel Computing

    Up until recently most computers were serial computers. Such computers had a single processor chip containing a single processor. Parallel computing is based on the idea that if more than one task can be processed simultaneously on multiple processors then a program would be able to run more rapidly than it could on a single processor. The supercomputers of the 1990s, such as the Cray computers, were extremely expensive to purchase (usually over $1,000,000) and often required cooling by liquid helium so they were also very expensive to run. Clusters of networked computers (e.g. a Beowulf cluster of PCs running Linux) have been, since 1994, a much cheaper solution to the problem of fast processing of complex computing tasks. By 2008, most new desktop and laptop computers contained more than one processor on a single chip (e.g. the Intel "Core 2 Duo" released in 2006 or the Intel "Core 2 Quad" released in 2007). Having multiple processors does not necessarily mean that parallel computing will work automatically. The operating system must be able to distribute programs between the processors (e.g. recent versions of Microsoft Windows and Mac OS X can do this). An individual program will only be able to take advantage of multiple processors if the computer language it's written in is able to distribute tasks within a program between multiple processors. For example, OpenMP supports parallel programming in Fortran and C/C++.
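    As a small sketch of the OpenMP approach mentioned above, the C program below splits a summation loop across the available cores (compile with gcc -fopenmp); the loop itself is invented for illustration.

    #include <stdio.h>
    #include <omp.h>

    int main(void) {
        const int n = 1000000;
        double sum = 0.0;

        /* OpenMP divides the loop's iterations among the processor
         * cores; reduction(+:sum) safely combines each core's
         * partial total at the end. */
        #pragma omp parallel for reduction(+:sum)
        for (int i = 0; i < n; i++)
            sum += i * 0.5;

        printf("sum = %.0f (up to %d threads)\n", sum, omp_get_max_threads());
        return 0;
    }
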
    And ever since, computers have kept getting smaller and smaller...