
Unit 9. Staying Legal in Cyberspace



When it comes to the law, it's just not true that in cyberspace no one can hear you scream. Companies will have to walk a digital tightrope to ensure that their Web sites do not breach the laws of any country where they want to sell electronically.

Selling goods and services over the Internet is about to become a cheap and efficient way for many companies to reach a large group of potential customers. But those suppliers wanting to use the Internet as a way to attract customers from different countries face a vast array of different laws and regulations, from copyright and trade marks to advertising standards and even what constitutes decency. The idea of a “one size fits all” Web site may not be an option.

Online suppliers who have no interest in building long-term business relationships can probably get away with operating outside the law. After making some quick money, they can easily shut down their business in one obscure jurisdiction which has little regard for enforcement of the law and transfer to another.

However, for the vast majority of suppliers the problem of Internet law should be a major concern. The fact is that a company Web page should not breach laws in any country of the world.

A directive from the EU Parliament should result in common laws throughout the European Union.

Once electronic orders start rolling in, there is the question of how different countries treat a transnational order. Who is liable, and for how much, if security measures don't work and money or goods are lost or stolen?

Could the product being supplied cause personal injury or death? If so, liability for product defects could lead to huge lawsuits, particularly in jurisdictions like the US. By selling over the Internet and agreeing to take orders from certain countries, companies may well be moving into new markets and need to increase their insurance cover.

However, if the Web site makes it clear that all orders can be rejected by a supplier then it can reserve the right to keep out of certain countries if it thinks the legal exposure would be too high.

There can also be problems with the formation of contracts. Under English law, for example, there is no need for a contract to be signed to be valid. Agreements which are legally enforceable are made all the time by telephone or in meetings or by fax. E-mail is no different.

Clearly, as with all other contracts, a supplier needs to make sure its standard terms and conditions of sale are sent by some means to the buyer before the contract is made, but in some ways this is easier electronically than it is by fax. With faxes, sales departments can all too easily forget to fax the reverse of order confirmation forms, where the all-important conditions appear.

Companies need to consider where their main customers are located. If they are in a country where a contract must be signed to be valid, and local law overrides any law set out in standard terms, then the Internet may be a useless tool to form binding contracts.

Unit 10. Fiber Optic Cable

Very thin transparent fibers have been developed that will eventually replace the twisted-pair copper wire traditionally used in the telephone system. These hairlike fiber optic cables carry data faster and are lighter and less expensive than their copper-wire counterparts. Twisted-pair wire and coaxial cable carry data as electrical signals. Fiber optic cable carries data as laser-generated light beams.

The differences between the data transmission rates of copper wire and fiber optic cable are tremendous. In the time it takes to transmit a single page of Webster’s Unabridged Dictionary over twisted-pair copper wire (about 6 seconds), the entire dictionary could be transmitted over a single fiber optic cable.
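The comparison above implies a concrete speedup factor: if copper carries one page in the time fiber carries the whole dictionary, fiber is faster by roughly the dictionary's page count. A back-of-the-envelope sketch; the page count of about 2,700 is an assumed figure for illustration, not given in the text:

```python
# Rough arithmetic implied by the text's copper-vs-fiber comparison.
# Assumption: the unabridged dictionary has ~2,700 pages (hypothetical figure).
pages_in_dictionary = 2700
copper_time_per_page_s = 6.0           # one page over twisted-pair copper (from the text)

# Fiber sends the entire dictionary in those same 6 seconds,
# so the implied speedup equals the number of pages.
fiber_time_for_dictionary_s = copper_time_per_page_s
implied_speedup = pages_in_dictionary * copper_time_per_page_s / fiber_time_for_dictionary_s

print(f"Fiber is roughly {implied_speedup:.0f}x faster than twisted-pair copper")
```

With the assumed page count, the implied speedup is on the order of thousands, which matches the "tremendous" difference the text describes.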

Another of the many advantages of fiber optic cable is its contribution to data security. It is much more difficult for a computer criminal to intercept a signal sent over fiber optic cable (via a beam of light) than it is over copper wire (an electrical signal).

Fiber optic technology has opened the door for some very interesting domestic applications. The high-capacity cable will service our telephone, our TV, and our PC. Fiber optic cable will enable us to see the party on the other end of the telephone conversation. As for TV viewing, we will be able to choose from hundreds of movies, including current releases, and we will be able to choose when we watch them. In the PC world, tapping into an information network will be an increasingly visual experience, with plenty of high-resolution color graphics. For example, instead of reading a buying service's product description, we'll be able to view a photo-quality display of it. However, we may need to wait a few years to enjoy these services. The expense of fiber optic cable may delay its widespread implementation in the home.

Texts for independent class and home translation

Text 1. Computer science

Computer science is a field of study that deals with the structure, operation, and application of computers and computer systems.

Computer science includes engineering activities, such as the design of computers and of the hardware and software of computer systems, and theoretical, mathematical activities, such as the analysis of algorithms and performance studies of systems. It also involves experimentation with new computer systems and their potential applications.

Computer science was established as a discipline in the early 1960s. Its roots lie mainly in the fields of mathematics (e.g., Boolean algebra) and electrical engineering (e.g., circuit design). The major subdisciplines of computer science are (1) architecture (the design and study of computer systems), an area that overlaps extensively with computer engineering; (2) software, including such topics as software engineering, programming languages, operating systems, information systems and databases, artificial intelligence, and computer graphics; and (3) theory, including computational methods and numerical analysis as well as data structures and algorithms.

Text 2. Computer

A computer is any of various automatic electronic devices that solve problems by processing data according to a prescribed sequence of instructions. Such devices are of three general types: analog, digital, and hybrid. They differ from one another in terms of operating principle, equipment design, and application.

The analog computer operates on data represented by continuously variable quantities, such as angular positions or voltages, and provides a physical analogy of the mathematical problem to be solved. Capable of solving ordinary differential equations, it is well suited for use in systems engineering, particularly for implementing real-time simulated models of processes and equipment. Another common application is the analysis of networks, such as those for electric-power distribution.

Unlike the analog computer, which operates on continuous variables, the digital computer works with data in discrete form — i.e., expressed directly as the digits of the binary code. It counts, lists, compares, and rearranges these binary digits, or bits, of data in accordance with very detailed program instructions stored within its memory. The results of these arithmetic and logic operations are translated into characters, numbers, and symbols that can be readily understood by the human operator or into signals intelligible to a machine controlled by the computer. Digital computers can be programmed to perform a host of varied tasks. As a consequence, more than 90 percent of the computers in use today are of this type. Government and business make extensive use of the digital computer's ability to organize, store, and retrieve information by setting up huge data files. Its capacity to adjust the performance of systems or devices without human intervention also lends itself to many applications. For example, the digital computer is used to control various manufacturing operations, machine tools, and complex laboratory and hospital instruments. The same capability has been exploited to automate the operational systems of high-performance aircraft and spacecraft. Among the multitude of other significant applications of the digital computer are its use as a teaching aid (e.g., in the remedial instruction of basic language and mathematics skills) and its employment in scientific research to analyze data and generate mathematical models.

The hybrid computer combines the characteristics and advantages of analog and digital systems; it offers greater precision than the former and more control capability than the latter. Equipped with special conversion devices, it utilizes both analog and discrete representation of data. In recent years hybrid systems have been used in simulation studies of nuclear-power plants, guided-missile systems, and spacecraft, in which a close representation of a dynamic system is essential.

Mechanical analog and digital computing devices date back to the 17th century. A logarithmic calculating device, which was the precursor of the slide rule and is often regarded as the first successful analog device, was developed in 1620 by Edmund Gunter, an English mathematician. The first mechanical digital calculating machine was built in 1642 by the French scientist-philosopher Blaise Pascal. During the ensuing centuries, the ideas and inventions of many mathematicians, scientists, and engineers paved the way for the development of the modern computer.

The direct forerunners of present-day analog and digital systems emerged about 1940. John V. Atanasoff built the first electronic digital computer in 1939. Howard Aiken's fully automatic large-scale calculator using standard machine components was completed in 1944. J. Presper Eckert and John W. Mauchly completed the first programmed general-purpose electronic digital computer in 1946. The first stored-program computers were introduced in the late 1940s, and subsequent computers have increasingly become faster and more powerful.

Text 3. Software

Computer programs, the software that is becoming an ever-larger part of the computer system, are growing more and more complicated, requiring teams of programmers and years of effort to develop. As a consequence, a new subdiscipline, software engineering, has arisen. The development of a large piece of software is perceived as an engineering task, to be approached with the same care as the construction of a skyscraper, for example, and with the same attention to cost, reliability, and maintainability of the final product. The software-engineering process is usually described as consisting of several phases, variously defined but in general consisting of: (1) identification and analysis of user requirements, (2) development of system specifications (both hardware and software), (3) software design (perhaps at several successively more detailed levels), (4) implementation (actual coding), (5) testing, and (6) maintenance.

Even with such an engineering discipline in place, the software-development process is expensive and time-consuming. Since the early 1980s, increasingly sophisticated tools have been built to aid the software developer and to automate as much as possible the development process. Such computer-aided software engineering (CASE) tools span a wide range of types, from those that carry out the task of routine coding when given an appropriately detailed design in some specification language to those that incorporate an expert system to enforce design rules and eliminate software defects prior to the coding phase.

As the size and complexity of software has grown, the concept of reuse has become increasingly important in software engineering, since it is clear that extensive new software cannot be created cheaply and rapidly without incorporating existing program modules (subroutines, or pieces of computer code). One of the attractive aspects of object-oriented programming is that code written in terms of objects is readily reused. As with other aspects of computer systems, reliability (usually rather vaguely defined as the likelihood that a system will operate correctly over a reasonably long period of time) is a key goal of the finished software product. Sophisticated techniques for testing software have therefore been designed. For example, a large software product might be deliberately "seeded" with artificial faults, or "bugs"; if they are all discovered through testing, there is a high probability that most actual faults likely to cause computational errors have been discovered as well. The need for better-trained software engineers has led to the development of educational programs in which software engineering is either a specialization within computer science or a separate program. The recommendation that software engineers, like other engineers, be licensed or certified is gaining increasing support, as is the momentum toward the accreditation of software engineering degree programs.
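The fault-seeding idea described above can be quantified with a simple capture-recapture style estimate: if testing catches a known fraction of the seeded bugs, the same detection rate is assumed to apply to real bugs. A minimal sketch of that reasoning; all the numbers are invented for illustration:

```python
# Fault-seeding estimate: assume the test process catches seeded and
# real bugs at roughly the same rate, so the fraction of seeded bugs
# found estimates the fraction of real bugs found.
def estimate_remaining_bugs(seeded, seeded_found, real_found):
    """Estimate how many real bugs are still undiscovered."""
    if seeded_found == 0:
        raise ValueError("no seeded bugs found; detection rate unknown")
    detection_rate = seeded_found / seeded          # e.g., 16/20 = 0.8
    estimated_total_real = round(real_found / detection_rate)
    return estimated_total_real - real_found        # bugs still latent

# Illustration: 20 bugs seeded, 16 of them found, and 40 real bugs found.
# Detection rate 0.8 -> estimated 50 real bugs in total, 10 still latent.
print(estimate_remaining_bugs(seeded=20, seeded_found=16, real_found=40))  # 10
```

If all seeded bugs are found (detection rate 1.0), the estimate of remaining real bugs drops to zero, which is the intuition the text states.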

Text 4. Expert system

An expert system is a computer program that uses artificial intelligence to solve problems within a specialized domain that ordinarily requires human expertise. The first expert system was developed in 1965 by Edward Feigenbaum and Joshua Lederberg of Stanford University in California, U.S. Dendral, as their expert system was later known, was designed to analyze chemical compounds. Expert systems now have commercial applications in fields as diverse as medical diagnosis, petroleum engineering, and financial investing.

In order to accomplish feats of apparent intelligence, an expert system relies on two components: a knowledge base and an inference engine. A knowledge base is an organized collection of facts about the system's domain. An inference engine interprets and evaluates the facts in the knowledge base in order to provide an answer. Typical tasks for expert systems involve classification, diagnosis, monitoring, design, scheduling, and planning for specialized endeavours.

Facts for a knowledge base must be acquired from human experts through interviews and observations. This knowledge is then usually represented in the form of "if-then" rules (production rules): "If some condition is true, then the following inference can be made (or some action taken)." The knowledge base of a major expert system includes thousands of rules. A probability factor is often attached to the conclusion of each production rule, because the conclusion is not a certainty. For example, a system for the diagnosis of eye diseases might indicate, based on information supplied to it, a 90 percent probability that a person has glaucoma, and it might also list conclusions with lower probabilities. An expert system may display the sequence of rules through which it arrived at its conclusion; tracing this flow helps the user to appraise the credibility of its recommendation and is useful as a learning tool for students.
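The "if-then" production rules with attached probability factors described above can be sketched as data plus a tiny inference loop. This is a toy illustration, not a real expert-system shell; the rule contents and probabilities are invented:

```python
# Minimal production-rule sketch: each rule pairs a set of required
# facts with a conclusion and a probability factor, as in the text.
rules = [
    # (required facts, conclusion, probability) -- invented examples
    ({"high_eye_pressure", "optic_nerve_damage"}, "glaucoma", 0.90),
    ({"high_eye_pressure"}, "ocular_hypertension", 0.60),
    ({"cloudy_lens"}, "cataract", 0.85),
]

def infer(facts):
    """Fire every rule whose conditions are all present among the facts,
    returning conclusions ordered from most to least probable."""
    fired = [(conclusion, p) for conditions, conclusion, p in rules
             if conditions <= facts]
    return sorted(fired, key=lambda cp: cp[1], reverse=True)

# The eye-disease example from the text: 90% glaucoma, plus a
# lower-probability alternative conclusion.
print(infer({"high_eye_pressure", "optic_nerve_damage"}))
```

A real system would also record which rules fired and in what order, which is the trace the text says helps users appraise the credibility of a recommendation.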

Human experts frequently employ heuristic rules, or "rules of thumb," in addition to simple production rules. For example, a credit manager might know that an applicant with a poor credit history, but a clean record since acquiring a new job, might actually be a good credit risk. Expert systems have incorporated such heuristic rules and increasingly have the ability to learn from experience. Nevertheless, expert systems remain supplements, rather than replacements, for human experts.

Text 5. Computer memory

Computer memory is a physical device that is used to store such information as data or programs (sequences of instructions) on a temporary or permanent basis for use in an electronic digital computer. The memory of a typical digital computer retains information of this sort in the form of the digits 0 and 1 of the binary code. It contains numerous individual storage cells, each of which is capable of holding one such binary digit (or "bit") when placed in either of two stable electronic, magnetic, or physical states corresponding to 0 and 1. The main memories of digital computers usually operate by means of transistor circuits. In these electronic circuits, binary digits are represented as states of electric charge (on or off, closed or open, conducting or nonconducting, resistive or nonresistive) that can be held, detected, and changed for purposes of storing or manipulating the data represented by the digits.

Most digital computer systems have two levels of memory — the main memory and one or more auxiliary storage units. Besides the main memory, other units of the computer (e.g., the control unit, arithmetic-logic unit [ALU], and input/output units) also use transistor circuits to store electronic signals.

The flow of electric current through the transistors in memory units is controlled by semiconductor materials. Semiconductor memories utilizing very-large-scale integration (VLSI) circuitry are extensively used in all digital computers because of their low cost and compactness. Composed of one or more silicon chips only about a quarter of an inch in size, they contain several million microelectronic circuits, each of which stores a binary digit. Semiconductor memories provide great storage capacity but are volatile—i.e., they lose their contents if the power supply is cut off.

A special type of transistor circuit for temporary storage of a binary digit is called a flip-flop. A single flip-flop consists of four or a few more transistors. Once a flip-flop stores a binary digit (0 or 1), it keeps that digit until it is overwritten with the opposite digit. A set of flip-flops that temporarily stores a program instruction (or two or three instructions in the case of certain types of computers) or a number (as in a computational result) is called a register. Numerous flip-flops and registers are used not only in the memory unit but in the ALU and control unit as well.
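The behavior described above (a flip-flop holds one bit until overwritten; a register is a row of flip-flops) can be modeled in a few lines. This is a toy software model of the storage behavior only, not a circuit description:

```python
# Toy model: a flip-flop keeps its bit until explicitly overwritten;
# a register is a row of flip-flops holding one machine word.
class FlipFlop:
    def __init__(self):
        self.bit = 0                 # power-on state chosen arbitrarily here

    def write(self, bit):
        self.bit = bit & 1           # store 0 or 1; the value then persists

class Register:
    def __init__(self, width):
        self.cells = [FlipFlop() for _ in range(width)]

    def load(self, value):
        for i, cell in enumerate(self.cells):
            cell.write((value >> i) & 1)   # one bit per flip-flop

    def read(self):
        return sum(cell.bit << i for i, cell in enumerate(self.cells))

r = Register(8)                      # an 8-bit register
r.load(0b1011)
print(r.read())                      # 11: the bits persist until reloaded
```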

Main memory. The memory unit of a digital computer typically has a main (or primary) memory, cache, and secondary (or auxiliary) memory. The main memory holds data and instructions for immediate use by the computer's ALU. It receives this information from an input device or an auxiliary storage unit. In most cases, the main memory is a high-speed random-access memory (RAM), i.e., a memory in which specific contents can be accessed (read or written) directly in a very short time regardless of the sequence (and hence location) in which they were recorded. Two types of main memory are possible with random-access circuits: static random-access memory (SRAM) and dynamic random-access memory (DRAM). A single memory chip is made up of several million memory cells. In an SRAM chip, each memory cell consists of a single flip-flop (for storing a binary digit) and a few more transistors (for reading and writing operations). In a DRAM chip, each memory cell consists of a capacitor (rather than a flip-flop) and a single transistor. When a capacitor is electrically charged, it is said to store the binary digit 1, and when discharged, it represents 0; these changes are controlled by the transistor. Because it has fewer components, DRAM requires a smaller area on a chip than does SRAM, and hence a DRAM chip can have a greater memory capacity, though its access time is slower than that of SRAM.

The cache is an SRAM-based memory of small capacity that has a faster access time than the main memory and that temporarily stores data and parts of a program for quicker processing by the ALU.

Auxiliary, or secondary, memory. Auxiliary storage units are an integral part of a computer's peripheral equipment. They can store substantially more information than a main memory but operate at slower speeds. The most common forms of secondary storage are magnetic disks and tapes.

Magnetic disks are flat, circular plates coated with a magnetic material. There are two types: hard disks, which are made of aluminum or glass and are physically rigid; and floppy disks, which are made of plastic and are flexible. Both types of disks come in diameters of 3.5 and 5.25 inches (9 and 13 cm). Hard disks that can store anywhere from 20 megabytes to 2 gigabytes (20 million to 2 billion bytes, or small groups of adjacent binary digits constituting a subunit of information) are readily available for desktop computers, and still larger ones can be had. Floppy disks have a much smaller capacity of only two to three megabytes. In both types of disk, data on their surfaces is arranged in concentric tracks. A tiny magnet, called a magnetic head, writes a binary digit (1 or 0) by magnetizing a tiny spot on a disk in different directions and reads digits by detecting the magnetization direction of the spots. A magnetic-disk drive is an assembly of one or more disks, magnetic heads, and a mechanical device for rotating the disks for reading or writing purposes.

Magnetic tapes are also sometimes used in auxiliary storage units. They have an even greater memory capacity than disks, but their access time is far slower because they are sequential-access memories — i.e., ones in which data in consecutive addresses are sequentially read or written as a tape is unwound. Magnetic disks are partly random-accessed (because a magnetic head for reading or writing goes to a desired circular track) and partly sequential-accessed (because data is read or written sequentially from that track as the disk rotates).

Hard disks are routinely used for storing current records and applications software on personal and other small computers. Larger computers may use RAID (redundant array of inexpensive drives), which consists of a group of hard-disk drives that work together as one disk drive. A typical RAID consists of five or more drives with 3.5-inch or 5.25-inch hard disks; this array yields reasonably high access speeds and is more reliable yet less expensive than a traditional single drive with large hard disks. RAIDs are widely used with mainframe computers that require auxiliary memory of very large capacity.
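The text states that a RAID is more reliable than a single large drive; one common way that redundancy is achieved (not detailed in the text) is parity: an extra block stores the XOR of the data blocks, so the contents of any one failed drive can be rebuilt from the survivors. A sketch of that idea with invented byte values:

```python
from functools import reduce

# Parity sketch: the parity block is the XOR of all data blocks,
# so any single missing block can be rebuilt from the others.
data_drives = [0b10110010, 0b01101100, 0b11100001, 0b00011111]  # invented bytes
parity = reduce(lambda a, b: a ^ b, data_drives)

# Simulate losing drive 2 and rebuilding it from parity plus survivors:
# XOR-ing parity with the surviving blocks cancels them out, leaving
# exactly the lost block.
survivors = data_drives[:2] + data_drives[3:]
rebuilt = reduce(lambda a, b: a ^ b, survivors, parity)

print(rebuilt == data_drives[2])  # True
```

This illustrates why an array of small drives can be both cheaper and more reliable than one large drive: the loss of any single member is recoverable.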




