В.С. Слухинська, І.Ф. Шилінська 



 

Англійська мова для професійного спілкування

 

Навчальний посібник

 


 

 

Topic 1.

What is a computer?

 

Computers affect your life every day and will continue to do so in the future. New uses for computers and improvements to existing technology are being developed continually.

The first question related to understanding computers and their impact on our lives is, “What is a computer?” A computer is an electronic device, operating under the control of instructions stored in its own memory unit, that can accept data (input), process data arithmetically and logically, produce results (output), and store the results for future use. Most computers also include the capability to communicate by sending data to and receiving data from other computers and to connect to the Internet. While different definitions of a computer exist, this definition includes a wide range of devices with various capabilities. Often the term computer or computer system is used to describe a collection of devices that function together to process data.

Data is input, processed, output, and stored by specific equipment called computer hardware. This equipment consists of input devices, a system unit, output devices, storage devices, and communications devices.

Input devices are used to enter data into a computer. Two common input devices are the keyboard and the mouse. As the data is entered using the keyboard, it is temporarily stored in the computer’s memory and displayed on the screen of the monitor. A mouse is a type of pointing device used to select processing options or information displayed on the screen. The mouse is used to move a small symbol that appears on the screen. This symbol, called a mouse pointer or pointer, can be many shapes but is often in the shape of an arrow.

The system unit is a box-like case that contains the electronic circuits that cause the processing of data to occur. The electronic circuits usually are part of or are connected to a main circuit board called the motherboard or system board. The system board includes the central processing unit, memory and other electronic components. The central processing unit (CPU) contains a control unit that executes the instructions that guide the computer through a task and an arithmetic/logic unit (ALU) that performs math and logic operations. The CPU is sometimes referred to as the processor.

Memory, also called RAM (Random Access Memory) or main memory, temporarily stores data and program instructions while they are being processed.

Storage devices, sometimes called secondary storage or auxiliary storage devices, store instructions and data when the system unit is not using them. Storage devices often function as an input source when previously stored data is read into memory. A common storage device on personal computers is called a hard disc drive. A hard disc drive contains a high-capacity disc or discs that provide greater storage capacities than floppy discs. A CD-ROM drive uses a low-powered laser light to read data from removable CD-ROMs.

Communication devices enable a computer to connect to other computers. A modem is a communications device used to connect computers over telephone lines. A network interface card (мережевий адаптер) is used to connect computers that are relatively close together, such as those in the same building. A group of computers connected together is called a network.

The devices just discussed are only some of the many types of input, output, storage, and communications devices that can be part of a computer system. A general term for any device connected to the system unit is peripheral device.

Whether small or large, computers can perform four general operations. These four operations are input, process, output, and storage. Together, they comprise the information processing cycle. Each of these four operations can be assisted by a computer’s ability to communicate with other computers. Collectively, these operations describe the procedures that a computer performs to process data into information for immediate use or store it for future use.
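
As a toy illustration of the four operations, here is a minimal C program that accepts data (input), adds it (process), prints the result (output), and saves it to a file (storage); the file name is just an example:

```c
/* A minimal sketch of the information processing cycle:
   input -> process -> output -> storage. The file name is illustrative. */
#include <stdio.h>

int main(void) {
    double a, b;

    /* Input: accept raw data from the user */
    printf("Enter two numbers: ");
    if (scanf("%lf %lf", &a, &b) != 2) return 1;

    /* Process: manipulate the data arithmetically */
    double sum = a + b;

    /* Output: produce results in a form people can use */
    printf("Sum: %f\n", sum);

    /* Storage: save the result for future use */
    FILE *f = fopen("result.txt", "w");
    if (f) { fprintf(f, "%f\n", sum); fclose(f); }
    return 0;
}
```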

All computer processing requires data. Data refers to the raw facts, including numbers, words, images, and sounds, given to a computer during the input operation. In the processing phase, the computer manipulates and organizes the data to create information. As long as information exists only inside our heads, there is no way for it to be processed by a computer. For computer processing, information must be represented by such definite symbols as words, numbers, drawings, and sounds.

Information refers to data that has been processed into a form that has meaning and is useful. The production of information by processing data on a computer is called information processing. During the output operation, the information that has been created is put into some form, such as a printed report or an electronic page that people can use. The information also can be stored electronically for future use.

The people who either use the computer directly or use the information it provides are called computer users, end users, or simply users.


 

Vocabulary

 

auxiliary (AUX) допоміжний, додатковий

capacity ємність, об’єм

circuit схема, мікросхема, ланцюг

device прилад, пристрій

hardware апаратні засоби, апаратура, обладнання;

загальне позначення сукупності фізичних пристроїв

комп’ютера або його окремих частин на відміну від

програм або даних

 

network комп’ютерна мережа. Призначена для спільного використання обчислювальних ресурсів, периферійних пристроїв, застосувань і даних. Мережі класифікуються за географічною ознакою (локальні, кампусні, міські, регіональні, глобальні) топологією, передавальним середовищем, способом комутації тощо.

 

pointing device координатно-вказівний пристрій, позиціювальний

пристрій, вказівний пристрій, маніпулятор; клас

периферійних пристроїв, який застосовують для

переміщення курсору на екрані монітора

to process обробляти

storage зовнішня пам’ять; зовнішній пристрій для зберігання

даних; пам’ять (основна).

to store запам’ятовувати, зберігати

unit пристрій, блок, вузол

 


 

Exercises

 

I. Match the words from the text with their definitions:

 

1. Improvement A. A main circuit board

2. Input B. A control unit together with an arithmetic-logic unit

3. Output C. Making things better

4. Processing D. Something that is put into a computer

5. Motherboard E. Work on information used

6. CPU F. Information retrieval (пошук і вибірка інформації)

 

 

II. Continue the following sentences:

 

1. A computer is an electronic device …

2. Most computers include the capability to communicate by …

3. Input devices are used to …

4. A mouse is a type of …

5. The system board includes the central processing unit …

6. Storage devices often function as …

7. Communication devices enable a computer to …

8. The computer manipulates and organizes the data to create …

 

 

III. Identify whether the following statements are true or false. Use the model:

 

Student A: All computer processing requires data. – Student B: Yes, that is true.

S. A: The arithmetic/logic unit executes the instructions that guide the computer through a task. – S. B: No, you are wrong. It is the control unit’s function. The arithmetic/logic unit performs math and logic operations.

 

1. A computer is a collection of devices that function together to process data.

2. The system board includes the central processing unit and memory.

3. Main memory permanently stores data and program instructions when they are being processed.

4. Information processing cycle comprises input, process and output.

5. For computer processing, information must be represented by such definite symbols as words, numbers, drawings, and sounds.

 

 

Topics for Discussion

 

Examine your attitudes towards computers.

Are they based on personal experience?

Do you fear or distrust computers, and, if so, why?

How do you think people’s attitude towards computers might change as computers become more common at home, at school, and on the job?

 


 

Topic 2.

 

Computer Generations

 

The first Generation, 1951-1958:

The Vacuum Tube

The beginning of the computer age may be dated June 14, 1951. In the first generation, vacuum tubes – electronic tubes about the size of light bulbs – were used as the internal computer components. They were used for calculation, control, and sometimes for memory. However, because thousands of such tubes were required, they generated a great deal of heat, causing many problems in temperature regulation and climate control. In addition, because all the tubes had to be working simultaneously, they were subject to frequent burnout, and the people operating the computer often did not know whether the problem was in the programming or in the machine. Input and output also tended to be slow, since both operations were generally performed on punched cards.

Another drawback was that the language used in programming was machine language, which uses numbers, rather than the present-day higher-level languages, which are more like English. Programming with numbers alone made using the computer difficult and time-consuming.

Therefore, as long as computers were tied down to vacuum tube technology, they could only be bulky, cumbersome, and expensive.

In the first generation the use of magnetism for data storage was pioneered. For primary storage, magnetic core was the principal form of technology used. This consisted of small, doughnut-shaped rings about the size of a pinhead, which were strung like beads on intersecting thin wires. Magnetic core was the dominant form of primary storage technology for two decades. To supplement primary storage, first-generation computers stored data on punched cards. In 1957, magnetic tape was introduced as a faster, more compact method of storing data. The early generation of computers was used primarily for scientific and engineering calculation rather than for business data processing applications. Because of the enormous size, unreliability, and high cost of these computers, many people assumed they would remain very expensive, specialized tools, not destined for general use.

 

The Second Generation, 1959-1964:

The Transistor

The invention of the transistor, or semiconductor, was one of the most important developments leading to the personal computer revolution. Bell Laboratories engineers John Bardeen, Walter Brattain, and William Shockley invented the transistor in 1948. The transistor, which essentially functions as a solid-state electronic switch, replaced the much less suitable vacuum tube. The transistor revolutionized electronics in general and computers in particular. Not only were transistors a fraction of the size of vacuum tubes, but they also had numerous other advantages: they needed no warm-up time, consumed less energy, and were faster and more reliable.

The conversion to transistors began the trend toward miniaturization that continues to this day. Today’s small laptop (or palmtop) PC systems, which run on batteries, have more computing power than many earlier systems that filled rooms.

During this generation, another important development was the move from machine language to assembly languages. Assembly languages use abbreviations for instructions (for example, “L” for “LOAD”) rather than numbers. This made programming less cumbersome.

After the development of the symbolic languages came higher-level languages. In 1951, mathematician and naval officer Grace Murray Hopper conceived the first compiler program for translating from a higher-level language to the computer’s machine language. The first language to receive widespread acceptance was FORTRAN (for FORmula TRANslator), developed in the mid-1950s as a scientific, mathematical, and engineering language. Higher-level languages allowed programmers to give more attention to solving problems. They no longer had to cope with all the details of the machines themselves. Also, in 1962 the first removable disc pack was marketed. Disc storage supplemented magnetic tape systems and enabled users to have fast access to desired data.

The rudiments of operating systems were also emerging. Loader programs loaded other programs into main memory from external media such as punched cards, paper tape, or magnetic tape. Monitor programs helped the programmer or computer operator to load other programs, monitor their execution, and examine the contents of memory locations. An input-output control system consisted of a set of subroutines for manipulating input, output, and storage devices. By calling these subroutines, a program could communicate with external devices without becoming involved in the intricacies of their internal operations.

All these new developments made the second generation of computers less costly to operate – and thus began a surge of growth in computer systems.

 

The Third Generation, 1965-1970:

The Integrated Circuit

One of the most abundant elements in the earth’s crust is silicon, a nonmetallic substance found in common beach sand as well as in practically all rocks and clay. The element has given rise to the name “Silicon Valley” for Santa Clara County, which is about 30 miles south of San Francisco. In 1965 Silicon Valley became the principal site of the electronics industry making the so-called silicon chip.

In 1959, engineers at Texas Instruments invented the integrated circuit (IC), a semiconductor circuit that contains more than one transistor on the same base (or substrate material) and connects the transistors without wires. The first IC contained only six transistors. By comparison, the Intel Pentium Pro microprocessor used in many of today's high-end systems has more than 5.5 million transistors, and the integral cache built into some of these chips contains as many as an additional 32 million transistors. Today, many ICs have transistor counts in the multimillion range.

An integrated circuit is a complete electronic circuit on a small chip of silicon. The chip may be less than 1/8 inch square and contain hundreds of electronic components. Beginning in 1965, the integrated circuit began to replace the transistor in machines now called third-generation computers.

Silicon is used because it is a semiconductor. That is, it is a crystalline substance that will conduct electric current when it has been “doped” with chemical impurities shot into the latticelike structure of the crystal. A cylinder of silicon is sliced into wafers, each about 3 inches in diameter, and each wafer is “etched” repeatedly with a pattern of electrical circuitry.

Integrated circuits entered the market with the simultaneous announcement in 1959 by Texas Instruments and Fairchild Semiconductor that they had each independently produced chips containing several complete electronic circuits. The chips were hailed as a generational breakthrough because they had four desirable characteristics: reliability, compactness, low cost, and low power use.

In 1969, Intel introduced a 1K-bit memory chip, which was much larger than anything else available at the time. (1K bits equals 1,024 bits, and a byte equals 8 bits. This chip, therefore, stored only 128 bytes – not much by today’s standards.) Because of Intel’s success in chip manufacturing and design, Busicom, a Japanese calculator manufacturing company, asked Intel to produce 12 different logic chips for one of its calculator designs. Rather than produce 12 separate chips, Intel engineers included all the functions of the chips in a single chip.

In addition to incorporating all the functions and capabilities of the 12-chip design into one multipurpose chip, the engineers designed the chip to be controlled by a program that could alter the function of the chip. The chip then was generic in nature, meaning that it could function in designs other than calculators. Previous designs were hard-wired for one purpose, with built-in instructions; this chip would read from memory a variable set of instructions that would control the function of the chip. The idea was to design almost an entire computing device on a single chip that could perform different functions, depending on what instructions it was given.

The third generation saw the advent of computer terminals for communicating with a computer from a remote location.

Operating systems (OS) came into their own in the third generation. The OS was given complete control of the computer system; the computer operator, programmers, and users all obtained services by placing requests with the OS via computer terminals. Turning over control of the computer to the OS made possible models of operation that would have been impossible with manual control. For example, in multiprogramming the computer is switched rapidly from program to program in round-robin fashion, giving the appearance that all programs are being executed simultaneously.
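
A highly simplified C sketch of the round-robin idea: each program in turn gets a short time slice until all are finished. The program table and time units are invented for illustration:

```c
/* Hedged sketch of round-robin multiprogramming: the OS gives each
   program a short time slice in turn, so all appear to run at once. */
#include <stdio.h>

#define NPROGS 3

int main(void) {
    int remaining[NPROGS] = {5, 3, 7};  /* time units each program still needs */
    int slice = 1, done = 0;

    while (done < NPROGS) {
        for (int p = 0; p < NPROGS; p++) {     /* visit the programs in turn */
            if (remaining[p] > 0) {
                remaining[p] -= slice;          /* run one time slice */
                printf("ran program %d, %d units left\n", p, remaining[p]);
                if (remaining[p] <= 0) done++;
            }
        }
    }
    return 0;
}
```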

An important form of multiprogramming is time-sharing, in which many users communicate with a single computer from remote terminals.

 

The Fourth Generation, 1971-Present:

The Microprocessor

Through the 1970s, computers gained dramatically in speed, reliability, and storage capacity, but entry into the fourth generation was evolutionary rather than revolutionary. The fourth generation was, in fact, an extension of third-generation technology. That is, in the early part of the third generation, specialized chips were developed for computer memory and logic. Thus, all the ingredients were in place for the next technological development, the general-purpose processor-on-a-chip, otherwise known as the microprocessor. First developed by an Intel Corporation design team headed by Ted Hoff in 1969, the microprocessor became commercially available in 1971.

Nowhere is the pervasiveness of computer power more apparent than in the explosive use of the microprocessor. In addition to the common applications of digital watches, pocket calculators, and microcomputers – small home and business computers – microprocessors can be anticipated in virtually every machine in the home or business. (To get a sense of how far we have come, try counting up the number of machines, microprocessor controlled or not, that are around your house. Would more than one or two have been in existence 50 years ago?)

The 1970s saw the advent of large-scale integration (LSI). The first LSI chips contained thousands of transistors; later, it became possible to place first tens and then hundreds of thousands of transistors on a single chip. LSI technology led to two innovations: embedded computers, which are incorporated into other appliances, such as cameras and TV sets, and microcomputers or personal computers, which can be bought and used by individuals. In 1975, very large scale integration (VLSI) was achieved. As a result, computers today are 100 times smaller than those of the first generation, and a single chip is far more powerful than ENIAC.

Computer environments have changed, with climate-controlled rooms becoming less necessary to ensure reliability; some recent models (especially minicomputers and microcomputers) can be placed almost anywhere.

Large computers, of course, did not disappear just because small computers entered the market. Mainframe manufacturers have continued to develop powerful machines, such as the UNIVAC 1100, the IBM 3080 series, and the supercomputers from Cray.

Countries around the world have been active in the computer industry; few are as renowned for their technology as Japan. The Japanese have long been associated with chip technology, but recently they announced an entirely new direction.

Exercises

 

I. Match words with their definition:

 

1. Magnetic core A. The general-purpose processor-on-a-chip.

2. Silicon B. An electronic tube about the size of a light bulb.

3. Vacuum tube C. A nonmetallic crystalline substance.

4. Microprocessor D. A form of primary storage.

5. Reliability E. Heavy and awkward to carry, wear, etc.

6. Cumbersome F. The quality of being trusted; dependability.

 

II. Identify whether the following statements are true or false. Use the model:

 

S. A: Transistors had only one advantage – they consumed less energy. – S. B: No, that is false, because they not only consumed less energy, but they also needed no warm-up time and were faster and more reliable.

 

1. Large computers disappeared just because small computers entered the market.

2. Magnetic core was the secondary form of primary storage technology.

3. In 1957 magnetic disk was introduced as a faster, more compact method of storing data.

4. In the third generation the use of magnetism for data storage was pioneered.

5. The first language to receive widespread acceptance was FORTRAN.

6. The chips had four desirable characteristics: reliability, compactness, low cost, low power use.

7. Turning over control of the computer to the OS made possible models of operation that would have been impossible with manual control.

8. The early generation of computers was used primarily for business data processing applications.

 

Topic for Discussion.

 

If Charles Babbage succeeded in building a mechanical computer around the middle of the 19th century, how might our present use of computers and our attitudes to them be different?

 

 


 

Topic 3.

 

Components (Hardware)

Computer Input

 

Data input devices have been used since the sixties as a graphical user interface (GUI) to allow a user to input certain information into computer systems and to modify or operate on images or information displayed on an output screen attached to the computer system.

Input examples are keyboards; CD-ROMs; DVD-ROMs; microphones (including speech analysis/recognition); graphics cards; digital video cards; scanners; cameras; camcorders; video devices such as TVs and VCRs; and sensors.

There has been a trend toward the increased use of input technologies that provide a more natural user interface for computer users. You can now enter data and commands directly and easily into a computer system through pointing devices like electronic mice and touch pads, and technologies like optical scanning, handwriting recognition, and voice recognition. These developments have made it unnecessary to always record data on paper source documents and then keyboard the data into a computer in an additional data entry step. Further improvements in voice recognition and other technologies should enable an even more natural user interface in the future.

Keyboards are still the most widely used devices for entering data and text into computer systems, but they have some serious limitations. For example, persons who are not trained typists find using a keyboard tedious and error-producing. That is why researchers in such areas as pointing devices, optical character recognition, and speech recognition are seeking ways to eliminate or minimize the use of keyboards.

A computer keyboard has keys for all the characters as well as mathematical and foreign-language symbols. The enter key causes the cursor on the display to move to the beginning of a new line. The escape key, which is usually marked Esc, is often used to cancel data entries and to terminate the execution of commands and programs. The number keys along the top row of the keyboard are inconvenient for entering large amounts of numeric data. For that reason a numeric keypad, in which the number keys are arranged in a square array, as on a calculator, is provided.

Holding down the shift key causes the letter keys to type uppercase rather than lowercase letters.

Holding down the control key (Ctrl) causes certain other keys, mainly letter keys, to generate control characters.

The Alt key, when pressed, establishes an alternate meaning for each key.

The Ctrl and Alt keys are often used to indicate that a keystroke represents a command to the program rather than a part of the input data.

Cursor control keys move the cursor on the display up, down, left, and right, allowing the user to position the cursor at the point on the display where data is to be entered or changes are to be made.

Pointing devices are a better alternative for issuing commands, making choices, and responding to prompts displayed on your video screen. They work with your operating system’s graphical user interface (GUI), which presents you with icons, menus, windows, buttons, bars, and so on, for your selection. For example, pointing devices such as electronic mice and touch pads allow you to easily choose from menu selections and icon displays using point-and-click or point-and-drag methods.

The electronic mouse is the most popular pointing device used to move the cursor on the screen, as well as to issue commands and make icon and menu selections. By moving the mouse on a desktop or pad, you can move the cursor into an icon displayed on the screen. Pressing buttons on the mouse activates various activities represented by the icon selected.

The trackball, pointing stick, and touch pad are other pointing devices most often used in place of the mouse. A trackball is a stationary device related to the mouse. You turn a roller ball with only its top exposed outside its case to move the cursor on the screen. A pointing stick (also called a trackpoint) is a small buttonlike device sometimes likened to the eraser head of a pencil. It is usually centered one row above the space bar of a keyboard. The cursor moves in the direction of the pressure you place on the stick. The touch pad is a small rectangular touch-sensitive surface usually placed below the keyboard. The cursor moves in the direction your finger moves on the pad. Trackballs, pointing sticks, and touch pads are easier to use than a mouse for portable computer users and are thus built into most notebook computer keyboards.

Touch screens are devices that allow you to use a computer by touching the surface of its video display screen. Some touch screens emit a grid of infrared beams, sound waves, or a slight electric current that is broken when the screen is touched. The computer senses the point in the grid where the break occurs and responds with an appropriate action. For example, you can indicate your selection on a menu display by just touching the screen next to that menu item.
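
A hedged C sketch of how such a grid can locate a touch: the controller scans the horizontal and vertical beams and reports the cell where both are broken. The beam-reading functions are stand-ins for real hardware:

```c
/* Hedged sketch: locating a touch on an infrared-grid screen.
   A broken vertical and horizontal beam together index the touch point. */
#include <stdio.h>
#include <stdbool.h>

#define X_BEAMS 32
#define Y_BEAMS 24

/* Stand-ins for the hardware readout: true if beam i is interrupted. */
bool x_beam_broken(int i) { return i == 12; }  /* stubbed sample values */
bool y_beam_broken(int i) { return i == 7; }

int main(void) {
    int x = -1, y = -1;
    for (int i = 0; i < X_BEAMS; i++) if (x_beam_broken(i)) x = i;
    for (int i = 0; i < Y_BEAMS; i++) if (y_beam_broken(i)) y = i;

    if (x >= 0 && y >= 0)
        printf("touch sensed at grid cell (%d, %d)\n", x, y);
    else
        printf("no touch\n");
    return 0;
}
```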


 

Comments:

 

pointing device координатно-вказівний пристрій, позиціювальний пристрій, вказівний пристрій, маніпулятор

prompt запрошення, підказка (на екрані)

point-and-click “вказати і клацнути” – загальний метод роботи з мишею в ОС

point-and-drag “вказати і перетягнути” – технологія роботи з екранними об’єктами у Windows за допомогою миші

to emit випромінювати

to occur траплятися, відбуватися

tedious нудний, стомливий

seek пошук; шукати, розшукувати, намагатись знайти

touchpad сенсорна панель; тактильний пристрій вводу, що застосовується для керування курсором у ноутбуках та в сенсорному керуванні мобільними телефонами

Exercises

 

Computer Mouse

 

A computer mouse is a handheld device that a user slides over a suitable surface causing the cursor on a computer screen to move in a direction determined by the motion of the device.

Computer mice are often referred to as cursor positioning devices or cursor control devices, although mice are utilized to perform many other functions, such as to launch applications, resize and move windows, open documents, drag and drop documents, select icons, text, or menu items in a pull-down menu, and others. The mouse permits a computer user to position and move a cursor on a computer screen without having to use a keyboard. The mouse and mouse buttons allow the user to move a cursor or other pointer to a specific area of the computer screen and depress one or more buttons to activate specific computer program functions.

Computer mice can be found in a variety of physical embodiments. Typically, a mouse comprises a body that serves as a grip for the user's hand and as a structure for mounting a movement sensing system and two or more mouse buttons for selecting computer functions.

A computer mouse is ergonomically designed so that a user's hand fits snugly around the device.

Computer mice are available with electro-mechanical, opto-mechanical, or optical movement sensing systems. The electro- and opto-mechanical systems typically incorporate a ball arranged for rotation in the body and protruding from the bottom of the mouse into contact with the surface on which the mouse is resting. Movement of the mouse causes the ball to rotate. Electrical or optical transducers in the body convert the motion of the ball into electrical signals proportionate to the extent of movement of the mouse in x and y directions.

In addition to mechanical types of pointing devices like the conventional mouse, optical pointing devices have also been developed. The optical mouse has several advantages, such as precise detection of the movement of the user’s hand and smooth motion, compared with a conventional ball mouse, and thus its use is increasing more and more. An optical mouse utilizes optical sensors to detect movement of the mouse relative to a surface. It has no mechanical moving parts and may be used on almost any flat surface. Optical mice respond more quickly and precisely than electromechanical mice, and they are growing in popularity. An optical mouse also overcomes the dust problem by generating the movement signal through the detection of reflected light. Generally, an optical mouse operates by reflecting light emitted from the main body of the mouse off the surface beneath it, enabling the movement of the mouse on the pad to be detected and a cursor on a computer monitor to be moved. An optical mouse optically recognizes its movement over the surface, converts the recognized value to an electric signal, and transmits the electric signal to the computer, whereby the position of the cursor on the monitor can be determined.
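
Whatever the sensing technology, the reported movement ends up as (dx, dy) displacements that the computer adds to the cursor position. A minimal C sketch, with sample displacement values in place of real sensor input:

```c
/* Hedged sketch: how mouse displacement reports update the cursor.
   The dx/dy values would come from the sensor; here they are samples. */
#include <stdio.h>

#define SCREEN_W 1920
#define SCREEN_H 1080

static int clamp(int v, int lo, int hi) { return v < lo ? lo : v > hi ? hi : v; }

int main(void) {
    int cx = 960, cy = 540;                        /* current cursor position */
    int moves[][2] = {{5, -3}, {12, 4}, {-7, 0}};  /* sample (dx, dy) reports */

    for (int i = 0; i < 3; i++) {
        cx = clamp(cx + moves[i][0], 0, SCREEN_W - 1);  /* stay on screen */
        cy = clamp(cy + moves[i][1], 0, SCREEN_H - 1);
        printf("cursor at (%d, %d)\n", cx, cy);
    }
    return 0;
}
```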

Mechanical mouse devices incorporate moving parts that can wear out. The roller ball mechanism also requires periodic cleaning to remove accumulated dirt and to prevent internal malfunctioning. To overcome those drawbacks, the optical mouse has a light transmitter such as a light-emitting diode (LED), a light receiver such as a photodiode, and associated components.

Since such an optical mouse has advantages of the accuracy of its motion detection, and smoothness in its motion, as compared with the conventional ball-type mouse, its use is gradually increasing.

 

Role Play.

You are a shop assistant at the Computer Store. Describe all the advantages of an optical mouse to the customer.

 

Read the text.

Touch screen monitor

 

Portable electronic devices have been designed to perform a large variety of functions, including cellular phones, personal digital assistants (PDAs), video camcorders, digital cameras, and notebook computers. All of these devices store information and perform tasks under the direction of the user of the device, and therefore have a user interface. A typical user interface includes a means for displaying information, a means for inputting information and operation control, and often a means for playing sounds and audio signals. As the size of these communication devices decreases and the number of functions increases, it has become increasingly important for a user to be able to enter commands and information into the communication device in an efficient manner. With a reduction in size of the device, a keypad input device must also be reduced in size, thereby decreasing the efficiency with which information can be input by reducing the number and size of the keys.

Furthermore, with a reduction in size of the device, the display size must also be reduced. Still furthermore, the use of a mouse with such devices is usually not possible since a mouse requires a flat clean surface to be properly used.

The touch screen display is an alternative to fixed buttons. Touch screens are more convenient than conventional computer screens because the user can directly point to an item of interest on the screen instead of having to use a mouse or other pointer. A touch screen allows the user of a terminal to enter a menu selection or data by placing a finger or other object at a location on the display screen that corresponds to the menu item, function or data numeral to be entered.

The touch screens are manufactured as separate devices and mechanically mated to the viewing surfaces of the displays. Touch screens can be activated by many different types of contacts, including by finger, pen, and stylus. The user touches different areas of the touch screen to activate different functions. Touch screens for computer input permit a user to write or draw information on a computer screen or select among various regions of a computer generated display, typically by the user's finger or by a free or tethered stylus.

The use of a touch screen input device that serves both as a display and as an input device for the communication device allows a larger display in that a large keypad is no longer required since many of the functions have been taken over by the use of the display screen as an input device.

The touch screen is made of transparent material. When the touch screen is placed over a video display device, images from the video display device are visible through the touch screen. A touch screen typically involves a cathode ray tube monitor (CRT) or liquid crystal display (LCD) monitor and a transparent touch-sensitive overlay that is attached to the front face of the monitor.

 

Comprehension questions:

 

What does a typical user interface include?

Why has it become increasingly important for a user to be able to enter commands and information into the communication device in an efficient manner?

What is an alternative to fixed buttons?

How can touch screens be activated?

What material is the touch screen made of?


 

Topic 4.

 

Source Data Automation

 

Source data automation, or source data collection, refers to procedures and equipment designed to make the input process more efficient by eliminating the manual entry of data. Instead of a person entering data using a keyboard, source data automation equipment captures data directly from its original form. The original form is called a source document. In addition to making the input process more efficient, source data automation usually results in a higher input accuracy rate.

An image scanner, sometimes called a page scanner, is an input device that can electronically capture an entire page of text or images such as photographs or art work. The scanner converts the text or images on the original document into digital data that can be stored on a disk and processed by the computer. The digitised data can be printed, displayed separately, or merged into another document for editing.

Optical recognition devices use a light source to read codes, marks, and characters and convert them into digital data that can be processed by a computer.

Optical codes use a pattern of symbols to represent data. The most common optical code is the bar code. Most of us are familiar with the “zebra-striped” Universal Product Code (UPC), which appears on most supermarket products. It consists of a set of vertical lines and spaces of different widths. The bar code reader uses the light pattern from the bar code lines to identify the item. The UPC bar code, used for grocery and retail items, can be translated into a ten-digit number that identifies the product manufacturer and product number.
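
The printed digits under a UPC-A bar code also carry a final check digit, which the scanner uses to verify that a read was correct. A small C example of the standard UPC-A check-digit calculation (the sample code number is purely illustrative):

```c
/* UPC-A check digit: digits in odd positions (1st, 3rd, ...) count
   three times, digits in even positions count once, and the check
   digit brings the total up to the next multiple of 10. */
#include <stdio.h>

int upc_check_digit(const char *first11) {
    int sum = 0;
    for (int i = 0; i < 11; i++) {
        int d = first11[i] - '0';
        sum += (i % 2 == 0) ? 3 * d : d;  /* i == 0 is position 1 (odd) */
    }
    return (10 - sum % 10) % 10;
}

int main(void) {
    const char *code = "03600029145";    /* first 11 digits of a sample UPC */
    printf("check digit: %d\n", upc_check_digit(code));  /* prints 2 */
    return 0;
}
```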

Optical character recognition (OCR) devices are scanners that read typewritten, computer-printed, and in some cases hand-printed characters from ordinary documents.

A number of optical character recognition (OCR) systems are known. Typically, such systems comprise apparatus for scanning a page of printed text and performing a character recognition process on a bit-mapped image of the text, which is a pixel-by-pixel representation of the overall image in binary form. The recognition system reads characters of a character code line by framing and recognizing the characters within the image data. During the recognition process, the document is analyzed for several key factors such as layout, fonts, text, and graphics. The document is then converted into an electronic format that can be edited with application software. The output image is then supplied to a computer or other processing device, which performs an OCR algorithm on the scanned image. The document can be of many different languages, forms, and features. The purpose of the OCR algorithm is to produce an electronic document comprising a collection of recognized words that are capable of being edited. In general, electronic reading machines using computer-based optical character recognition (OCR) comprise personal computers outfitted with computer scanners, optical character recognition software, and computerized text-to-voice hardware or software.

The OCR devices scan the shape of a character, compare it with a predefined shape stored in memory, and convert the character into the corresponding computer code. They use a light source to read special characters and convert them into electrical signals to be sent to the CPU. The characters can be read by both humans and machines. They are often found on sales tags in department stores or imprinted on credit card slips.
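
A toy C sketch of that compare step: the scanned character bitmap is matched against predefined shapes by counting differing pixels and picking the closest template. The 5×5 bitmaps are invented toy data, far smaller than real scans:

```c
/* Hedged sketch of template matching in OCR: compare a scanned glyph
   with predefined shapes stored in memory; fewest differing pixels wins. */
#include <stdio.h>

#define N 25  /* 5x5 bitmap, flattened row by row */

int difference(const int *a, const int *b) {
    int d = 0;
    for (int i = 0; i < N; i++) d += (a[i] != b[i]);
    return d;
}

int main(void) {
    int templates[2][N] = {
        {0,1,1,1,0, 1,0,0,0,1, 1,1,1,1,1, 1,0,0,0,1, 1,0,0,0,1},  /* 'A' */
        {1,1,1,1,0, 1,0,0,0,1, 1,1,1,1,0, 1,0,0,0,1, 1,1,1,1,0},  /* 'B' */
    };
    char labels[2] = {'A', 'B'};
    /* A noisy scan: one pixel differs from the stored 'A' shape. */
    int scanned[N] = {0,1,1,1,0, 1,0,0,0,1, 1,1,1,1,1, 1,0,0,0,1, 1,0,0,1,1};

    int best = 0;
    for (int t = 1; t < 2; t++)
        if (difference(scanned, templates[t]) < difference(scanned, templates[best]))
            best = t;
    printf("recognized as '%c'\n", labels[best]);  /* prints A */
    return 0;
}
```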

In large department stores you can see a device called the wand reader. It is capable of reading OCR characters. After data from a retail tag has been read, the computer system can automatically and quickly pull together the information needed for billing purposes.

Wands are an extremely promising alternative to key input because they eliminate one more intermediate step between “data” and “processing” – that of key entry.

Magnetic ink character recognition (MICR) characters use special ink that can be magnetized during processing. MICR is used almost exclusively by the banking industry for processing checks. Blank checks already have the bank code, account number, and check number printed in MICR characters across the bottom. When the check is processed by the bank, the amount of the check is also printed in the lower right corner. Together, this information is read by MICR reader/sorter machines as part of the check-clearing process.

 

Comments:

 

source document початковий документ

to capture захоплювати, збирати (дані)

recognition розпізнавання

pixel елемент зображення, точка растра; мінімальний адресований елемент двомірного растрового зображення, колір і яскравість якого можна задати незалежно від інших точок

bar code штриховий код; спеціальний код, у якому кожний знак складено з вертикальних темних і світлих смуг різної ширини, який друкують на упаковці товарів тощо для автоматизованого вводу даних про них

 

Role Play.

 

You have installed a new scanner at your office. Explain to your employees how to use it. Use the specification of the device.

 

Topic 5.

 

Enrolment

Everybody sounds slightly different, so the first step in using a voice recognition system involves reading an article displayed on the screen. This process, called enrolment, takes less than 10 minutes and results in a set of files being created which tell the software how you speak. The enrolment only has to be done once, after which the software can be started as needed. The new pieces of software claim that the enrolment process is even easier than in previous versions.

Dictating and Correcting

 

When talking, people often hesitate, mumble or slur their words. One of the key skills in using voice recognition software is learning how to talk clearly so that the computer can recognize what you are saying. This means planning what to say and then delivering speech in complete phrases or sentences. The voice recognition software will misunderstand some of the words spoken and it is necessary to proofread and then correct your mistakes. Corrections can be made by using the mouse and keyboard or by using your voice. When corrections are made the voice recognition software will adapt and learn, so that the same mistake will not occur again. Accuracy should improve with careful dictation and correction.

Editing and Formatting Text

 

Text can be changed (edited) very easily. The text to be changed can be selected (highlighted) by using commands like “select line”, “select paragraph” and then the changes can be spoken into the computer. These will then replace the selected text.

Typically, voice recognition systems with large vocabularies require training the computer to recognize your voice in order to achieve a high degree of accuracy. Training such systems involves repeating a variety of words and phrases in a training session and using the system extensively. Trained systems regularly achieve a 95 to 99 percent word recognition rate. Training to 95 percent accuracy takes only a few hours.

Two examples of continuous speech recognition software for word processing are Naturally Speaking by Dragon Systems and Via Voice by IBM.

Dragon Naturally Speaking

This program is distributed by Nuance. NaturallySpeaking is recognised as the market leader and is the alternative most frequently recommended by AbilityNet.

IBM ViaVoice

This is also distributed by Nuance. It offers good accuracy, but is not as easy to use as NaturallySpeaking.

Qpointer

Qpointer provides good command and control facilities, but is not as good for writing tasks, as it makes more recognition errors. It operates differently from NaturallySpeaking and ViaVoice.

Minimum requirements are a 133 MHz Pentium class microprocessor, 32 MB of RAM, an industry standard sound card, and 50 MB of available hard disk capacity. The products have 30,000-word vocabularies expandable to 60,000 words, and sell for less than $200.

 

Comments:

dyslexic дислексія, невміння писати

 

accuracy точність; безпомилковість; чіткість зображення

 

to decipher розшифровувати, дешифрувати

 

to convert перетворювати, конвертувати

 

comparator компаратор, порівнювач, блок порівняння;

 

enrolment реєстрація

 

IV. Complete the rows.

 

to recognize – recognition – recognizer

to process – processing – processor

to convert – conversion – converter

compare – comparison – comparator

to install –installation– installer

produce – production – producer

to correct –correction– corrector

 

V. Answer the questions.

 

1. What does voice input allow?

2. What is the goal of voice recognition?

3. What are the sounds of a sequence of words like?

4. Why does voice recognition promise to be the easiest method for data entry, word processing, and conversational computing?

5. What speech recognition did early voice recognition products use?

6. What must the computer have to decipher the signal?

7. What makes a voice-recognition program run many times faster?

8. What pace does new continuous speech recognition (CSR) software recognize?

9. What do voice recognition systems do?

10. Why do voice recognition systems with large vocabularies require training the computer?

11. What does training voice recognition systems involve?

12. What are the steps you follow beginning to talk?

13. What are the examples of continuous speech recognition software for word processing?

 

Topic 6

 

The Arithmetic/Logic Unit

The ALU can perform arithmetical operations, comparisons, logical (Boolean) operations, and shifting operations. Comparison operations allow a program to make decisions based on its input data and the results of previous calculations. Logical operations can be used to compute whether a particular statement is true or false. Logical operations are also frequently used for manipulating particular bits within a byte or word. The logical and shifting operations are used together to edit bit patterns – to extract a group of bits from one pattern and insert it at a given position within another bit pattern.
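
A short C illustration of that last point: shifting and logical (AND, OR, NOT) operations combine to extract a bit field from one word and insert it at a chosen position in another. The helper names are ours, not from the text:

```c
/* Sketch of the bit-pattern editing described above: extract a group
   of bits from one word and insert it at a given position in another.
   Assumes field lengths below 32 bits. */
#include <stdio.h>
#include <stdint.h>

/* Extract `len` bits starting at bit `pos` (0 = least significant). */
uint32_t extract(uint32_t word, int pos, int len) {
    return (word >> pos) & ((1u << len) - 1);   /* shift, then mask */
}

/* Insert `field` (`len` bits wide) into `word` at bit `pos`. */
uint32_t insert(uint32_t word, uint32_t field, int pos, int len) {
    uint32_t mask = ((1u << len) - 1) << pos;
    return (word & ~mask) | ((field << pos) & mask);  /* clear, then OR in */
}

int main(void) {
    uint32_t src = 0xABCD1234, dst = 0;
    uint32_t field = extract(src, 8, 8);   /* grabs 0x12 */
    dst = insert(dst, field, 16, 8);       /* places it at bits 16..23 */
    printf("field=0x%X dst=0x%X\n", field, dst);  /* 0x12, 0x120000 */
    return 0;
}
```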

While performing these operations the ALU takes data from a temporary storage area inside the CPU called registers. Registers are a group of cells used for memory addressing, data manipulation, and processing. Some of the registers are general purpose and some are reserved for certain functions. Registers are a high-speed memory which holds only data for immediate processing and the results of this processing. If these results are not needed for the next instruction, they are sent back to main memory and the registers are occupied by the new data used in the next instruction.

 

 

The Control Unit

The control unit directs and controls the activities of the internal and external devices. It interprets the instructions fetched into the computer, determines what data, if any, are needed, where it is stored, where to store the results of the operation, and sends the control signals to the devices involved in the execution of the instructions.

The speed at which a computer can carry out its operations is referred to as its megahertz rate. A CPU that runs at one megahertz would complete one million operations per second. Many of the computers that you use on this campus are in the 200 to 400 megahertz range. By the end of the year 2002, 2000 megahertz (i.e., 2 gigahertz) rates became available. In line with the trends noted above, ever faster systems can be designed. Because other technologies must work with the CPU, computers which are rated at a higher megahertz rate than others do not necessarily run faster. The best test of a computer's speed is to get out your stopwatch and time basic operations such as saving a file or running an animation, then run the same program on a different computer and compare.
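
As a rough sketch of the stopwatch test the paragraph suggests, a program can time a fixed workload itself; run the same code on two machines and compare the results (the 100-million-addition workload is arbitrary):

```c
/* Sketch of a simple speed comparison: time the same fixed workload,
   then repeat the measurement on a different computer. */
#include <stdio.h>
#include <time.h>

int main(void) {
    clock_t start = clock();

    volatile double x = 0.0;          /* volatile keeps the loop honest */
    for (long i = 0; i < 100000000L; i++)
        x += 1.0;

    double secs = (double)(clock() - start) / CLOCKS_PER_SEC;
    printf("100 million additions took %.2f s\n", secs);
    return 0;
}
```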

It is also possible for many CPUs to share computing responsibilities in one computer. Soon new computer operating systems will force personal computer buyers to also make decisions about how many CPUs they want in their computer, perhaps one for every application. The world’s fastest computer is a supercomputer in Yokohama that runs at over 35 trillion operations per second for the Earth Simulator Project, which does climate modeling, merging the work of over 5,000 integrated CPUs. The term supercomputer is a relative term referring to the fastest computers available on the planet in any given year. Nevertheless, the room-size computer or computer complex has never really gone away. Today's supercomputers still need much of the space and cooling equipment that the original room-sized ENIAC computer did.

In spite of their rapid growth in capacity, today's computers based on electrons may one day become the dinosaurs of the early history of computer technology. Think of today's computers as being in the steam engine age of the history of the automobile that came before today's internal combustion engines. The next generation of computers may be based on photons (light beams) and run hundreds of times faster than today's fastest computers. The patents for photon transistors were first filed in Scotland in 1978. One could surmise from this alone that the next fifty years of central processing unit technology will be as dramatic as the last.

Computer chip technology continues to become even more sophisticated, adding more features to the chips than just transistors storing ones and zeros. Companies are now developing three-dimensional structures on the chips at nanotechnology scale. Nanotechnology involves work at molecular or macromolecular levels in the length scale of approximately 1-100 nanometers (1 nanometer is one billionth of a meter). One application of this capacity would be with drug companies seeking to speed up and drop the cost of research. They would like to run entire sets of research lab procedures on chips using microfluidics. Microfluidics refers to moving microscopic quantities of liquids on chips. In addition to the standard transistors holding ones and zeros, these chips are built in layers which contain miniature valves, pumps, and channels that act within a chip as fluidic circuitry.

Another approach used by computer centers is blade computing. From 20 to 300 computers are packed in a rack like blades in a knife rack, hence the name blade computing. This concentrates the number of computers that can fit into one space and greatly reduces electrical power and management costs. Link up a cluster of really fast processors and the device becomes a supercomputer. That is, the network is the computer.

Chips

Analog chips

Computer chips, thumbnail-size wafers of silicon or other substances, come in two distinct species, digital and analog. Computers communicate within themselves and with other computers using digital 1's and 0's. To interact with the world around them they need analog chips that can deal with continuous states of information and translate back and forth between the analog and digital environments.

Digital Chips

The CPU is one kind of digital chip, and it has already been discussed. Memory chips make up the second major part of a computer.

Computer memory contains many digital chips, which contain small switches representing a condition of on or off, or a 1 or a 0. Many different techniques and chemical structures are used to make this concept work. Computer "chips" made out of silicon are currently used to manage the state of these switches. Silicon is one common substance used to create computer memory, but it was not the first, and it is certainly not the last. For example, serious work is being done on using the chemical structure of proteins to create computer memory. Biochips are in your future.

 

Comments:

 

application застосування, прикладна програма; завершена прикладна програма або пакет, що забезпечує користувачу розв’язання певної задачі, наприклад, електронна таблиця або текстовий процесор

compatible сумісний

emulator емулятор; програма, за допомогою якої комп’ютер може виконувати програми, написані для іншого комп’ютера

nanotechnology нанотехнологія; загальний термін для позначення методів створення пристроїв, розмірами менш ніж 100 нм, серед яких і нова елементна база для комп’ютерів (наноелектроніка)

performance продуктивність, ефективність; виконання

 

Identify whether the following statements are true or false. Use the model:

1) Student A: Software emulators slow down the performance. – Student B: That is really so; although software emulators allow the CPU to run incompatible programs, they severely slow down the performance.

2) S. A: The more complicated the instruction set is, the faster the CPU works.

– S. B: No, that is wrong. The more complicated the instruction set is, the slower the CPU works.

 

1. The CPU is composed of the control unit and the arithmetic-logical unit only.

2. Three-dimensional structures on the chips at nanotechnology scale are not possible.

3. Most of today’s personal computer CPU’s have a 32 bit byte.

4. Today’s supercomputers do not need much space and cooling equipment.

5. Silicon is the only substance used to create computer memory.

6. Today’s computers based on electrons may one day become the dinosaurs of the early history of computers.

7. Computer memory does not contain many digital chips.

8. All the registers are general purpose.

 

Read the text.

 

We can visualize the central processing unit as a clerk (the control unit) with a calculator (the arithmetic-logic unit). The clerk carries out computations with the aid of the calculator, following a detailed set of instructions (the program) written on a chalkboard (main memory). The clerk also uses the chalkboard for temporary data storage. Values to be saved are copied onto the board from the calculator; when they are needed later in the calculation, they are entered into the calculator from the board. By telephone (the bus), the clerk instructs assistants (drive controllers) to transfer data between filing cabinets (auxiliary memory) and the chalkboard. Since the assistants transfer data directly to and from the chalkboard, rather than going through the clerk for each transfer, their activities correspond to direct memory access. An assistant can also obtain data (input) from those for whom the calculations are being performed and supply them with results (output).

When an assistant finishes an assigned task, he or she telephones the clerk for further instructions (interrupt request). After marking his or her place in the program, the clerk carries out a set of special instructions for answering telephone calls (interrupt handler). These instructions, which are also written on the chalkboard, specify what orders are to be given to the assistant. When the assistant has been properly instructed, the clerk resumes executing the program from the point at which it was interrupted when the telephone rang.

 

Now say with what functions of the CPU the given words can be compared:

 

a clerk

a calculator

a set of instructions

a chalkboard

telephone

assistants

filing cabinets

direct memory access

to obtain data

to supply with results

further instructions

a set of special instructions for answering telephone calls


 

Topic 7

 

Cache Memory

 

Cache memory was introduced as the first attempt at using memories of different speeds. The problem was to increase the speed of instruction execution. The analysis of programs showed that in the majority of programs only a few variables are used frequently, so only a few memory cells are frequently accessed. The solution was to store this frequently used data in a special memory with a higher speed. This type of memory is called a cache memory. For example, on a typical 100-megahertz system board, it takes the CPU as much as 180 nanoseconds to obtain information from main memory, compared to just 45 nanoseconds from cache memory.
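
To see what these figures mean in practice, here is a small worked example (the hit rate is an assumed figure, not one from the text): if 9 out of 10 accesses find their data in the cache, the average access time on such a board would be 0.9 × 45 ns + 0.1 × 180 ns = 40.5 ns + 18 ns = 58.5 ns, roughly three times faster than going to main memory every time.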

When the program is executed, some of the variables are held in the cache memory. The control unit interprets the instruction and looks for the necessary data in the cache memory first. If the data is there, it is processed; otherwise the control unit looks for the data in RAM. A more sophisticated cache memory keeps a count of the number of accesses made to each variable. These counts are compared at regular intervals and the most frequently used variables are moved to the cache.
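
A minimal C sketch of this lookup-and-promote behaviour, assuming invented variable names and a two-slot cache; the 45 ns and 180 ns costs reuse the figures quoted above:

```c
/* Hedged sketch: check the cache first, fall back to RAM, count the
   accesses, and at intervals promote the most-used variables to cache. */
#include <stdio.h>
#include <string.h>

#define VARS 4
#define CACHE_SLOTS 2

int counts[VARS];      /* accesses made to each variable */
int cached[VARS];      /* 1 if the variable currently lives in cache */

long access_var(int v) {
    counts[v]++;
    return cached[v] ? 45 : 180;   /* nanoseconds, per the figures above */
}

/* At a regular interval, keep the most frequently used variables cached. */
void rebalance(void) {
    memset(cached, 0, sizeof cached);
    for (int s = 0; s < CACHE_SLOTS; s++) {
        int best = -1;
        for (int v = 0; v < VARS; v++)
            if (!cached[v] && (best < 0 || counts[v] > counts[best])) best = v;
        cached[best] = 1;
    }
}

int main(void) {
    int trace[] = {0, 0, 1, 0, 2, 0, 1, 3};   /* sample access pattern */
    long total = 0;
    for (int i = 0; i < 8; i++) total += access_var(trace[i]);
    rebalance();
    printf("total time before rebalance: %ld ns; variable 0 cached now: %d\n",
           total, cached[0]);
    return 0;
}
```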

The cache memory system is managed by an 'intelligent' circuit called the cache memory controller. When a cache memory controller retrieves an instruction from RAM, it also brings the next several instructions back to cache. This occurs because there is a high probability that the adjacent instructions will also be needed.

To speed up computers even more, some CPUs (e.g. the 80486 and Pentiums) have built-in cache memory. In this case, there will be two cache memories: one built-in and another external (to the CPU). The built-in cache memory is also referred to as level 1 cache, L1, or primary cache. It is located inside the CPU. External cache is referred to as level 2 cache, L2, or secondary cache, and is located on the motherboard. The capacity of built-in cache is between 8 and 32K, depending on the microprocessor. The capacity of external cache ranges in size from 64K to 1M.

When CPU chips do not contain internal cache, the external cache, if present, would actually be the primary (L1) cache. Some secondary caches can be expanded, some cannot.

Some advertisements specify the type of the secondary cache installed as write-back or associative.

Write-back cache holds off writing to main memory until there is a lull in CPU activity. This gives an advantage in speed, but there is a danger that data can be lost if the power fails.

Associative cache describes an alternative architecture to direct mapped memory, and is generally faster than direct mapped cache.

Cache Speed and RAM Speed

In Pentium systems, 20ns cache SRAM is generally used for 50-60MHz system boards (using the Pentium 75/90/100/120), and 15ns cache SRAM is normally utilized for 66MHz system boards (using the Pentium 100/133). Cache SRAM at speeds up to 8ns has recently become available, although rare and expensive.

 

Comments:

 

cache memory кеш-пам’ять; надшвидкодіюча оперативна пам’ять, яка слугує для буферизації команд і/або даних


