Types of Networks. Neural Networks.
A network is simply two or more computers linked together. It allows users to share not only data files and software applications, but also hardware, like printers, and other computer resources such as fax machines. Most networks link computers within a limited area - within a department, an office, or a building. These are called Local Area Networks, or LANs. But networks can link computers across the world, so you can share information with someone on the other side of the world as easily as with a person at the next desk. When networks are linked together in this way, they are called Wide Area Networks, or WANs. Networks increase productivity by allowing workers to share information easily without printing, copying, telephoning, or posting. They also save money by sharing peripherals such as printers.

Neural Networks. A neural network is a powerful data modeling tool that is able to capture and represent complex input/output relationships. This tool can come in different forms; specifically, it can be hardware-based or emulated through special software. The true power and advantage of neural networks lies in their ability to represent both linear and non-linear relationships and in their ability to learn these relationships directly from the data being modeled. Traditional linear models are simply inadequate when it comes to modeling data that contains non-linear characteristics.

Different problems involve different functions. Some of them produce functions that are linear; others require using non-linear functions in order to get the solution. It really helps when one has a formula which describes a certain function of interest. Unfortunately, this is not always the case when solving more involved problems. That is when approximation comes in handy. The idea is that if we cannot have the exact function formula at hand, then we can use its approximation to calculate the necessary values. A neural network is one of the mechanisms for approximating functions, and it has some notable features.

Brain Analogy. Our brains are composed of billions of neurons, each one connected to thousands of other neurons to form a complex network of extraordinary processing power. An artificial neural network attempts to mimic our brain's processing capability, albeit on a far smaller scale. Information is transmitted from one neuron to another via the axon and dendrites. The axon carries the voltage potential, or action potential, from an activated neuron to other connected neurons. The action potential is picked up by receptors in the dendrites. The synaptic gap is where chemical reactions take place, either to excite or inhibit the action potential input to the given neuron.

Artificial networks are quite simple by comparison. For many applications artificial neural networks are composed of only a handful, a dozen or so, neurons. This is far simpler than our brains. Some specific applications use networks composed of perhaps thousands of neurons, yet even these are simple in comparison to our brains. At this time we cannot hope to approach the processing power of the human brain using our artificial networks; however, for specific problems simple networks can be quite powerful. The problem of simulating the human brain also has another aspect. Lack of knowledge concerning the processes in our brains prevents us from attempting to simulate the brain completely, as it seems impossible to simulate something that we do not quite understand.
Apparently, simulating the whole brain with a single, even immensely vast, network appears to be Sisyphean toil, as it will most likely fail. One possible solution is using a whole cascade of neural networks, each one having special functions. This is the biological metaphor for neural networks. It is not completely reflected in the construction of artificial networks, but the main feature is indeed inspired by the brain structure. The idea of combining a number of elements that are simple in structure into a network allows achieving incredible results.

Network Training. There is one particularly interesting thing about neural networks. Right after its "birth" a net cannot really do anything. In order to make it capable of solving the required problem, you must teach it by giving it examples of how you want things done. So how are examples presented? They are nothing but pairs of input-output data. You feed the network input data, then get its output and compare it with the output you expect. If the expected results are different from what the net outputs, then you adjust some things inside it and continue your experiments. Depending on the complexity of the function you are approximating, it will take a different number of "tries" to teach your network. After the network is trained, you can use it by feeding it data and accepting its output as a result that this time does not require any corrections. (A minimal sketch of such a training loop is given at the end of this text.)

Spheres of Application:
- Process modeling and control - creating a neural network model for a physical plant, then using that model to determine the best control settings for the plant.
- Machine diagnostics - detecting when a machine has failed, so that the system can automatically shut it down.
- Portfolio management - allocating the assets in a portfolio in a way that maximizes return and minimizes risk.
- Target recognition - a military application which uses video and/or infrared image data to determine whether an enemy target is present.
- Medical diagnosis - assisting doctors with their diagnosis by analyzing the reported symptoms and/or image data such as MRIs or X-rays.
- Credit rating - automatically assigning a company's or an individual's credit rating based on their financial condition.
- Targeted marketing - finding the set of demographics which have the highest response rate for a particular marketing campaign.
- Voice recognition - transcribing spoken words into ASCII text.
- Financial forecasting - using the historical data of a security to predict the future movement of that security.
- Quality control - attaching a camera or sensor to the end of a production process to automatically inspect for defects.
- Intelligent searching - an Internet search engine that provides the most relevant content and banner ads based on the user's past behavior.

Neural networks can no doubt take, and are actually taking, our understanding of algorithms and problem-solving techniques to a new level. They will even change the type of computers people use, as they have spurred the idea of neuro-computers - computers made of artificial neurons. Artificial neural networks have already shown their power and will do even more in the next few years. We should be very careful with our expectations, though. At the very dawn of neural networks a lot of harm was done to them because of unrealistic expectations. We cannot expect that we will soon build intelligent creatures using neural networks, because our knowledge about the thinking processes in our brains is very modest at present. The better we understand ourselves, the better are the chances of a breakthrough in the brain-building industry.
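To make the training procedure described above concrete, here is a minimal sketch of a tiny one-hidden-layer network learning to approximate a non-linear function from input-output pairs. The text does not prescribe any particular implementation; the target function (y = x squared), the network size, the learning rate and the number of passes below are illustrative assumptions, and Python with NumPy is used purely for convenience.

    # A minimal sketch (illustrative assumptions): a one-hidden-layer network
    # trained on input/output pairs to approximate the non-linear function y = x**2.
    import numpy as np

    rng = np.random.default_rng(0)

    # Training examples: pairs of input data and the output we expect.
    x = np.linspace(-1.0, 1.0, 64).reshape(-1, 1)
    y = x ** 2

    # A small network: 1 input -> 8 hidden units (tanh) -> 1 output.
    w1 = rng.normal(scale=0.5, size=(1, 8))
    b1 = np.zeros((1, 8))
    w2 = rng.normal(scale=0.5, size=(8, 1))
    b2 = np.zeros((1, 1))

    lr = 0.1
    for step in range(10000):
        # Feed the network the input data and get its output.
        h = np.tanh(x @ w1 + b1)
        out = h @ w2 + b2

        # Compare the output with the expected output.
        err = out - y

        # "Adjust some things inside it": change the weights by gradient descent.
        grad_out = 2 * err / len(x)
        grad_w2 = h.T @ grad_out
        grad_b2 = grad_out.sum(axis=0, keepdims=True)
        grad_h = grad_out @ w2.T * (1 - h ** 2)   # derivative of tanh
        grad_w1 = x.T @ grad_h
        grad_b1 = grad_h.sum(axis=0, keepdims=True)
        w1 -= lr * grad_w1
        b1 -= lr * grad_b1
        w2 -= lr * grad_w2
        b2 -= lr * grad_b2

    # After training, the net is used by feeding it data and accepting its output.
    test = np.array([[0.5]])
    print(np.tanh(test @ w1 + b1) @ w2 + b2)   # roughly 0.25, the value of x**2 at 0.5

The loop mirrors the procedure in the text: feed the net input data, compare its output with the expected output, adjust the weights, and repeat until the answers no longer need corrections.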
5. Artificial Intelligence

Expert systems are a class of computer programs that can advise, analyze, design, explain, explore, forecast, form concepts, identify, interpret, justify, learn, manage, monitor, plan, present, retrieve, schedule, test and tutor. Some of these programs have achieved expert levels of performance on the problems for which they were designed.

Expert systems are usually developed with the help of human experts who solve specific problems and reveal their thought processes as they proceed. If this process of protocol analysis is successful, the computer program based on this analysis will be able to solve the narrowly defined problems as well as an expert.

Experts typically solve problems that are unstructured and ill-defined, usually in a setting that involves diagnosis or planning. They cope with the lack of structure by employing heuristics, which are the rules of thumb that people use to solve problems when a lack of time or understanding prevents an analysis of all the parameters involved. Likewise, expert systems employ programmed heuristics to solve problems. Experts engage in several different problem-solving activities: they identify the problem, process data, generate questions, collect information, establish hypotheses, explore and refine, ask general questions, and make a decision.

Expert systems, like human experts, can have both deep and surface representations of knowledge. Deep representations are causal models, categories, abstractions and analogies. In such cases, we try to represent an understanding of structure and function. Surface representations are often empirical associations. With surface representations, all the system knows is that an empirical association exists; it is unable to explain why, beyond repeating the association. Systems that use knowledge represented in different forms have been termed multilevel systems. Work is just beginning in building such multilevel systems, and they will be a major research topic for this decade. Work also needs to be done in studying and representing in a general way the different problem-solving activities an expert performs. When you build expert systems, you realize that the power behind them is that they provide a regimen for experts to crystallize and codify their knowledge, and in the knowledge lies the power.
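The "programmed heuristics" mentioned above are often written down as if-then rules. The following sketch is only an illustration of that idea, not a description of any real expert system: the rules, the facts and the diagnostic domain are invented, and the inference loop is deliberately trivial.

    # An illustrative sketch of programmed heuristics: if-then rules applied by a
    # trivial forward-chaining loop. Rules and facts are invented for illustration.
    RULES = [
        # (conditions that must all be known facts, conclusion to add)
        ({"engine does not start", "lights are dim"}, "battery is weak"),
        ({"battery is weak"}, "recommend: charge or replace the battery"),
        ({"engine does not start", "lights are bright"}, "starter may be faulty"),
    ]

    def diagnose(facts):
        """Keep applying rules until no new conclusions can be drawn."""
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for conditions, conclusion in RULES:
                if conditions <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    print(diagnose({"engine does not start", "lights are dim"}))

In the terms used above, such rules are a surface representation: the system can repeat the association between symptoms and conclusions, but it cannot explain why the association holds.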
6. Digit that Means Nothing

The introduction of the zero to European mathematics was an essential contribution to modern technological development. The concept of symbolically representing "nothing" in a numerical system is considered to be one of man's greatest intellectual achievements. Various peoples throughout the world have used systems of counting without having the zero. The classical Greeks used different letters of their alphabet to denote the numbers from 1 to 10 and each of the multiples of 10. Any number not represented by a single letter symbol was expressed by the sum of the values of several symbols. For example, the number 238 was indicated by writing the letter symbols for 200, 30 and 8 adjacent to each other. The Romans used fewer symbols to represent a more limited number of integers, such as 1, 5, 10, 50, 100, 500 and 1000, and employed the additive principle to a greater degree. Thus, in writing the number 238 nine individual symbols were required: CCXXXVIII. The zero of modern civilization had its origin in India about 500 A.D. By 800 A.D. its use had been introduced to Baghdad, from where it spread throughout the Moslem world.

The zero, together with the rest of our "Arabic" numbers, was known in Europe by the year 1000 A.D., but because of the strong tradition of Roman numbers there was considerable resistance to its adoption. The zero became generally used in Western Europe only in the XIV century. Including the Hindu case, the concept of the zero, with its idea of positional value, appears to have been independently arrived at in three great cultures which were widely separated in space and time. About 500 B.C. the Babylonians began to use a symbol to represent a vacant space in their positional-value numbers. However, before the idea could be disseminated to other areas, its use apparently died out about 2000 years ago, along with the culture that gave it birth. The Mayas of Central America began using the zero about the beginning of the Christian era. They had been in possession of the zero for about a thousand years longer than the Spaniards, and, in general, the Mayas were more advanced in many aspects of mathematics than their conquerors. Modern civilization derives incalculable practical and theoretical benefits from the use of zero.

7. Types of Error

System errors affect the computer or its peripherals. For example, you might have written a program which needs access to a printer. If there is no printer present when you run the program, the computer will produce a system error message. Sometimes a system error makes the computer stop working altogether, and you will have to restart the computer. A sensible way of avoiding system errors is to write code to check that peripherals are present before any data is sent to them. Then the computer would warn you with a simple message on the screen, like 'printer is not ready or available'.

Syntax errors are mistakes in the programming language (like typing PRN1T instead of PRINT). Syntax errors cause the program to fail. Some translator programs won't accept any line that has syntax errors. Some only report a syntax error when they run the program. Some languages also contain special commands such as debug, which will report structural errors in a program. The programming manual for the particular language you're using will give details of what each error message means.

Logic errors are much more difficult to detect than syntax errors. This is because a program containing logic errors will run, but it won't work properly. For example, you might write a program to clear the screen and then print 'hello'. Here is the code for this:

10 Message
30 CLS
20 PRINT 'Hello'
40 END

The code has a logic error in it, but the syntax is right, so it will run. You can get rid of logic errors from simple programs by 'hand-testing' them or doing a 'dry run', which means working through each line of the program on paper to make sure it does what you want it to do. You should do this long before you type in the code.
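The distinction above between code that fails and code that runs but is wrong can be shown with a tiny invented example (in Python rather than the BASIC of the text): the program below contains no syntax error and runs happily, yet a missing pair of brackets makes its result wrong, and a 'dry run' on paper exposes the mistake.

    # An invented illustration of a logic error: the program runs, but the result
    # is wrong because the division applies only to the last mark.
    marks = [70, 80, 90]
    average = marks[0] + marks[1] + marks[2] / 3      # logic error
    print(average)                                    # prints 180.0, not 80.0

    average = (marks[0] + marks[1] + marks[2]) / 3    # corrected version
    print(average)                                    # prints 80.0

Working through the first expression by hand, as the text suggests, immediately shows that only the last mark is divided by three.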
8. The Basic Principles of Programming

Introduction. The purpose of this chapter is to introduce the student to the fundamental principles of coding and programming. These principles are connected with the stages of programming, flow-charting, the use of subroutines and the computer manual, etc. In order to leave students free to concentrate on these principles, the four-address format, with a minimum of instruction types, is utilized. However, it should be pointed out that the four-address format is used in this chapter for pedagogical reasons only. In practice, commercially available computers use only three-, two-, or one-address formats, the latter perhaps being the most common.

The Terms "Coding" and "Programming" are often used as synonyms. However, a code is more specifically a short list of instructions that directs the computer to perform only a part of the entire calculation, whereas the term "program" refers to the complete list of instructions used for the problem. Likewise, "programming" covers the planning of the solution as well as the writing of the instruction lists, or codes, whereas "coding" is usually limited in meaning to the writing of the instruction lists. Sometimes a code is called a routine.

Stages in Programming. There are five stages in programming. First, the computations to be performed must be clearly and precisely defined; the overall plan of the computation is diagrammed by means of a so-called flowchart. The second stage is the actual coding. It is often best to write a code in terms of a symbolic language first, for then changes are easily made; numbers are then assigned to the symbols, and the final code is prepared. In the third stage some procedure is used to get the code into the memory of the computer. The fourth stage consists of debugging the code, that is, detecting and correcting any errors. The fifth and final stage involves running the code on the computer and tabulating the results. It is well known that a single error in one instruction invalidates the entire code. Hence, programming is a technique requiring attention to detail without losing sight of the overall plan.

Instruction Format. Some bits of the instruction are set aside for the operation code designation; they tell whether the instruction is "add", "divide", etc. The rest of the bits usually define the four addresses. For the more usual operations that involve two operands, such as addition, multiplication, etc., two of the addresses are the addresses of the operands. The third address tells where the result is to be put; the fourth address tells where to obtain the next instruction. So, the instruction format is the way in which the different digits are allocated to represent specific functions.

Octal Shorthand. The first important detail of coding is the fact that the actual bits in an instruction are not written out in binary code; rather, some shorthand is written instead, i.e., the octal equivalent is written out. In other words, two octal digits represent the operation, and each address is represented by three octal digits. Thus, if 101 011 is the binary code for the command "add", then the instruction that says, "Add the contents of address 011 010 110 to the contents of address 011 100 101, put the result into address 011 110 100, and take the next instruction from address 100 000 001," is written in octal notation as: operation - 53, first operand address - 326, second operand address - 345, third address - 364, and fourth address - 401. In such cases it evidently facilitates matters to call addresses in the memory by their octal numbers. Also, numerical quantities will be written on the code sheet in octal (i.e., they will have to be converted from decimal to octal before being written on the code sheet). (A small sketch of this encoding is given below, after the note on the computer manual.)

The Computer Manual. For the computer we must have a computer manual that gives the operation codes of the different instructions and also defines precisely the meaning of the addresses for each instruction type. The coding manual must always be at the coder's side.
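The octal shorthand described above can be checked with a few lines of code. The field widths (a 6-bit operation code and four 9-bit addresses) are inferred from the example itself, not taken from any real computer manual, so the sketch below only illustrates the arithmetic of the notation.

    # A sketch of the octal shorthand from the example above: a 6-bit operation
    # code followed by four 9-bit addresses, written out as 14 octal digits.
    # The field widths are inferred from the example, not from a real manual.

    def encode(op, addresses):
        """Pack a four-address instruction and return its octal shorthand."""
        assert 0 <= op < 2 ** 6, "the operation code is 6 bits (2 octal digits)"
        word = op
        for a in addresses:
            assert 0 <= a < 2 ** 9, "each address is 9 bits (3 octal digits)"
            word = (word << 9) | a
        return f"{word:014o}"       # 14 octal digits in total

    # "Add the contents of address 326 to the contents of address 345, put the
    #  result into address 364, and take the next instruction from address 401."
    ADD = 0o53                      # binary 101 011
    print(encode(ADD, [0o326, 0o345, 0o364, 0o401]))   # prints 53326345364401

Because every octal digit corresponds to exactly three bits, the octal form is simply the binary instruction read off in groups of three, which is why the shorthand is convenient for the coder.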
Two further observations must be re-emphasized: first, when a word is called into the arithmetic/logic unit from the memory, it is not erased from its memory address, but remains there as well; second, when a word is put into a memory address, it replaces the previous contents of that address, i.e., it erases what had been there.

9. Mineral Industry Software

The structure of the mining industry continues to change, and the use of computers in the industry also changes. Computer software that has traditionally been developed and applied to ore reserve estimation and mine design is being increasingly applied to similar needs recognised in the environmental industry. Instead of calculating grades of a mineral, the 3D design software is being used to calculate contaminant plumes. Software used for designing an open pit mine is now also required to produce a final topography when mining is completed. Some packages have been able to do final topography for years, but capabilities and usability are increasing, and the trend is toward more interactive graphics, 3D representation and rendering with colour fill and shading.

The use of AutoCAD is a strong force in the mineral industry, and several solid user groups are beginning to appear. The strength of these groups is the ability to set and define the standards which they expect in AutoCAD application software. Most of the traditional mining packages still maintain their own graphics systems, and many are doing quite a good job. The influence of AutoCAD and CAD systems is recognised, and most vendors offer the capability of exporting files to DXF (Drawing Exchange Format) - even though this is not always a comfortable solution to the interactive graphics issue.

The application of computers in the minerals and earth science industry is broadening in general, and there is a wealth of software for all levels of computers available to the industry. Even as the industry fluctuates and activities move to various parts of the world, we see that the trend in computer use is continuing and broadening in scope.

For more than 100 years, earth science professionals have been using contour maps to graphically represent why minerals are where they are, and to quantify how much mineralisation is present. In many cases, the failure or success of a prospect can be directly correlated with how the contour maps were made. Today, many earth science professionals are using microcomputers and contouring software to aid their mapping operations. These new tools are inexpensive and easy to use; they can quickly process simple and complex data sets; they offer many different ways to perform spatial analyses; and they can generate presentation-quality maps. More importantly, these tools offer full, interactive control over all phases of the map construction and interpretation process, from creating and managing the database, to posting data on base maps, modeling spatial data distributions using a variety of spatial estimation techniques, and customising map appearance.
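The contouring passage above mentions "a variety of spatial estimation techniques" without naming one. As a single illustrative example (not necessarily what any particular mining or contouring package uses), here is a sketch of inverse-distance weighting, which estimates grid values from scattered samples - the kind of grid a contouring routine would then draw isolines through. The sample data are invented.

    # An illustrative sketch of one spatial estimation technique, inverse-distance
    # weighting: each grid value is a weighted average of scattered samples, with
    # nearer samples weighted more heavily. The sample data are invented.

    # (x, y, value) samples, e.g. assay grades at drill-hole collars.
    SAMPLES = [(0.0, 0.0, 2.1), (4.0, 1.0, 3.4), (1.0, 5.0, 1.2), (5.0, 5.0, 2.8)]

    def idw(x, y, power=2.0):
        """Estimate the value at (x, y) from the known samples."""
        num = den = 0.0
        for sx, sy, sv in SAMPLES:
            d2 = (x - sx) ** 2 + (y - sy) ** 2
            if d2 == 0.0:
                return sv                   # the point coincides with a sample
            w = 1.0 / d2 ** (power / 2.0)   # weight falls off with distance
            num += w * sv
            den += w
        return num / den

    # Estimate values on a coarse 6 x 6 grid.
    for gy in range(6):
        print(" ".join(f"{idw(gx, gy):4.2f}" for gx in range(6)))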