Application of Neural Networks in Electrical Engineering

Finding the optimal set of weights is often a tradeoff between computation time and minimizing the network error.

Threshold u: The threshold is also referred to as a bias value, a real number added to the weighted sum.

Activation function f: The activation function of the original McCulloch-Pitts neuron was the unit step function. The artificial neuron model has since been expanded to include other functions, such as the sigmoid, piecewise linear, and Gaussian.

Different models of the artificial neuron:
1. Adaline model
2. Madaline model
3. Rosenblatt model
4. McCulloch-Pitts model
5. Widrow-Hoff model
6. Kohonen model

1. The adaptive linear element (Adaline): a simple physical implementation of an adaptive neuron. Although the adaptive process is exemplified here for the case of a single output, a system with many parallel outputs is directly implementable with multiple units of this kind.

2. McCulloch-Pitts model: the earliest model of an artificial neuron, introduced by Warren McCulloch and Walter Pitts in 1943. The McCulloch-Pitts neuron is also known as a linear threshold gate.
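The artificial neuron with a bias and an interchangeable activation function described above can be sketched in a few lines. This is a minimal illustration, not code from the report; the function names are my own:

```python
import math

def neuron(inputs, weights, bias, activation):
    """Weighted sum of inputs plus bias, passed through an activation function."""
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return activation(s)

# Activation functions mentioned in the text.
def unit_step(s):          # the original McCulloch-Pitts choice
    return 1 if s >= 0 else 0

def sigmoid(s):            # logistic function, values in (0, 1)
    return 1 / (1 + math.exp(-s))

def piecewise_linear(s):   # input clipped to [0, 1]
    return max(0.0, min(1.0, s))

def gaussian(s):           # bell-shaped, peaks at s = 0
    return math.exp(-s * s)

print(neuron([1, 0], [0.5, 0.5], -0.25, unit_step))   # weighted sum 0.25 -> fires
```

Swapping the `activation` argument is all it takes to move between the classical neuron models.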

Definition of Neural Networks:

A linear threshold gate is a neuron with a set of inputs and one output; it simply classifies the set of inputs into two different classes, so the output is binary. Such a function can be described mathematically as: y = 1 if the weighted sum w1x1 + w2x2 + ... + wnxn reaches the threshold T, and y = 0 otherwise.

Classical activation functions: While it is possible to define an arbitrary cost function, in practice a particular cost is frequently used, either because it has desirable properties (such as convexity) or because it arises naturally from a particular formulation of the problem. Ultimately, the cost function depends on the desired task.
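Assuming the standard formulation, the linear threshold gate can be written directly; with unit weights and a threshold of 2 it computes logical AND (this example is my own, not from the report):

```python
def ltg(inputs, weights, threshold):
    """McCulloch-Pitts linear threshold gate: fires (1) if and only if the
    weighted sum of the inputs reaches the threshold, else outputs 0."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

# With all weights 1 and threshold 2, the gate classifies its
# binary inputs into the two classes "both on" and "not both on".
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", ltg([a, b], [1, 1], 2))
```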

Different types of activation functions can be used; three of them are described below. The most commonly used are nonlinear sigmoid activation functions such as the logistic function. A logistic function takes a continuous range of values from 0 to 1, in contrast to the discrete threshold function. A binary threshold function was used in the first model of an artificial neuron, the McCulloch-Pitts model of 1943.

Threshold functions go by many names; common to all is that they produce one of two scalar output values (usually 1 and -1, or 0 and 1) depending on whether the input reaches the threshold. Another type of activation function is the linear function, sometimes called the identity function, since the activation is simply the input. In general, if the task is to approximate some function, the output nodes are linear; if the task is classification, sigmoidal output nodes are used. Knowledge is acquired by the network from its environment through a learning process.

Characteristic qualities of a neural network include:
1. Parallel processing
2. Fault tolerance
3. Self-organisation
4. Ability to generalize
5. Complete computability
6. Continuous adaptability

The usual process of learning involves three tasks:
1. Compute the outputs.
2. Compare the outputs with the desired patterns and feed back the error.
3. Adjust the weights and repeat the process.

The learning process starts by setting the weights according to some rule. The difference between the actual output y and the desired output z is called the error (delta). The objective is to drive this error to zero, and the reduction in error is achieved by changing the weights.
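The compute-compare-adjust cycle with the delta (error) driving the weight updates can be sketched as follows. This is a generic single-neuron illustration of my own, not code from the report:

```python
def train_perceptron(patterns, lr=0.1, epochs=50):
    """Delta-rule loop: compute the output y, compare it with the desired
    output z to get the error delta, then adjust the weights and repeat."""
    n = len(patterns[0][0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, z in patterns:
            # 1. Compute the output.
            y = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else 0
            # 2. Compare with the desired pattern: the error delta.
            delta = z - y
            # 3. Adjust the weights to reduce the error toward zero.
            w = [wi + lr * delta * xi for wi, xi in zip(w, x)]
            b += lr * delta
    return w, b

# Learn logical OR from input-output pairs.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
w, b = train_perceptron(data)
```

After a few epochs the weights settle and every pattern is reproduced correctly.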

Supervised learning: The network is trained on input-output pairs. These pairs can be provided by an external teacher, or by the system that contains the neural network (self-supervised).

Unsupervised learning: In this paradigm the system is supposed to discover statistically salient features of the input population. Unlike the supervised learning paradigm, there is no a priori set of categories into which the patterns are to be classified; rather, the system must develop its own representation of the input stimuli.

Reinforcement learning: This type of learning may be considered an intermediate form of the above two. Here the learning machine performs some action on the environment and gets a feedback response from it. The learning system grades its action as good (rewarding) or bad (punishable) based on the environmental response, and adjusts its parameters accordingly. Parameter adjustment generally continues until an equilibrium state is reached, after which there are no further changes in the parameters. Self-organizing neural learning may be categorized under this type of learning.
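A minimal caricature of this reward/punish loop is sketched below. The environment, the reward shape, and the update rule are all invented for illustration; the point is only that graded feedback, not labelled examples, drives the parameter toward an equilibrium:

```python
import random

def reinforce(steps=2000, lr=0.1, seed=0):
    """Reward-driven learning: act on the environment, receive a graded
    response, and nudge the parameter toward actions that were rewarded.
    Updates taper off as the parameter approaches an equilibrium."""
    random.seed(seed)
    target = 3.0              # hidden property of the environment
    theta = 0.0               # the learner's adjustable parameter
    for _ in range(steps):
        action = theta + random.gauss(0, 0.5)   # explore around theta
        reward = -(action - target) ** 2        # environment grades the action
        baseline = -(theta - target) ** 2       # grade of the current policy
        if reward > baseline:                   # rewarding: move toward action
            theta += lr * (action - theta)
        # punishable actions leave theta unchanged
    return theta

print(reinforce())   # settles near the environment's hidden target
```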

Advantages:
1. Adapts to unknown situations
2. Autonomous learning and generalization

Fuzzy logic is a type of logic that recognizes more than simple true and false values, and hence better simulates the real world: it takes into account concepts like "usually", "somewhat", and "sometimes". Fuzzy logic and neural networks have been integrated for uses as diverse as automotive engineering. In recent years, data from neurobiological experiments have made it increasingly clear that biological neural networks, which communicate through pulses, use the timing of the pulses to transmit information and perform computation.

This realization has stimulated significant research on pulsed neural networks, including theoretical analyses, model development, neurobiological modeling, and hardware implementation. Some networks have been hard-coded into chips or analog devices.

The primary benefit of directly encoding neural networks onto chips or specialized analog devices is speed. NN hardware currently runs in a few niche areas where very high performance is required. Many NNs today use comparatively few neurons and need only occasional training; in these situations, software simulation is usually sufficient. When NN algorithms develop to the point where useful things can be done with very large numbers of neurons and synapses, high-performance NN hardware will become essential for practical operation.

All current NN technologies will most likely be vastly improved upon in the future. Everything from handwriting and speech recognition to stock market prediction will become more sophisticated as researchers develop better training methods and network architectures. Although neural networks do seem able to solve many problems, we must keep our exuberance in check: overconfidence in neural networks can result in costly mistakes. Expected future developments include:
1. Improved stock prediction
2. Common usage of self-driving cars
3. Composition of music
4. Automatic processing of handwritten documents
5. Detection of trends in the human genome
6. Self-diagnosis of medical problems

Seminar Report on "neural network and their applications", uploaded by Vivek Yadav.

Contents:
1. Introduction
2. Biological neuron
3. Artificial neuron


4. Different models of artificial neuron
5. Classical activation functions
6. Qualities of neural network
7. Different architectures of ANN
8. Learning of ANN
9. Applications of ANN
10. Recent advances in the field of ANN
11. Conclusion

Each neural network has an input layer, an output layer, and a number of hidden layers.

The latter compute complicated associations between patterns, and propagation takes place in a feed-forward manner, from the input layer to the output layer. (Figure: architecture of a back-propagation neural network.) Associated with each connection between two units is a numerical value Wij, which represents the weight of that connection[8]. These weights are modified during the training of the neural network in an iterative process; when the iterative process has converged, the collection of connection weights captures and stores the information present in the examples used in training.

This initiates the feed-forward process. (Figure: schematic representation of processing within an artificial neuron.) Both the activation function and its derivatives are continuous everywhere. These activation values are the output of the neural computations.

Artificial Neural Networks in Structural Engineering

The training of a back-propagation neural network can be envisaged as follows. In the first stage, an input pattern is propagated forward, and the error of each output node is computed from the difference between the computed output and the desired output. The second stage involves adjusting the weights in the hidden and output layers in order to minimize that difference.
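The two-stage procedure can be sketched for a single hidden layer of sigmoid units. This is a generic illustration of my own; the network size, learning rate, and XOR example are not taken from the report:

```python
import math, random

def sigmoid(s):
    return 1 / (1 + math.exp(-s))

def train_bp(patterns, n_hidden=3, lr=0.5, epochs=5000, seed=1):
    """Two-stage back-propagation: a forward pass computes the output error,
    then that error is propagated back to adjust both weight layers."""
    random.seed(seed)
    n_in = len(patterns[0][0])
    W1 = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hidden)]
    b1 = [0.0] * n_hidden
    W2 = [random.uniform(-1, 1) for _ in range(n_hidden)]
    b2 = 0.0
    for _ in range(epochs):
        for x, z in patterns:
            # Stage 1: feed-forward, then the error at the output node.
            h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
                 for row, b in zip(W1, b1)]
            y = sigmoid(sum(w * hi for w, hi in zip(W2, h)) + b2)
            d_out = (z - y) * y * (1 - y)
            # Stage 2: back-propagate and adjust hidden- and output-layer weights.
            d_hid = [d_out * w * hi * (1 - hi) for w, hi in zip(W2, h)]
            W2 = [w + lr * d_out * hi for w, hi in zip(W2, h)]
            b2 += lr * d_out
            W1 = [[w + lr * dh * xi for w, xi in zip(row, x)]
                  for row, dh in zip(W1, d_hid)]
            b1 = [b + lr * dh for b, dh in zip(b1, d_hid)]
    def predict(x):
        h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
             for row, b in zip(W1, b1)]
        return sigmoid(sum(w * hi for w, hi in zip(W2, h)) + b2)
    return predict

# XOR, the classic pattern a single-layer network cannot learn.
xor = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
predict = train_bp(xor)
```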

This training is classified as a supervised training algorithm[6]; the network learns how to respond to patterns of data presented to it. Different learning algorithms exist for training neural networks.


The most commonly used one is the Delta rule[9], or back-error propagation algorithm, which can be applied to adjust connection weights so that the network can predict unknown examples correctly. The behavior of an ANN depends on its topology, the weighting system used, and the activation function[9].

The choice of the activation function is based on the types of input and output desired and on the learning algorithm to be used. In addition, there is no direct way of determining the most appropriate number of nodes (processing units) to include in each hidden layer. The choice of the number of layers, the size of each layer, and the way each layer is to be connected is left to the developer to experiment with.

Increasing the number of hidden-layer nodes makes the network more powerful, but the training and operation time of the network will increase.


A general rule is to start with a simple network with one hidden layer, then increase the hidden layers and observe the performance of the network until the best architecture is reached. ANNs can be developed using special development tools. For example, several software programs are spreadsheet-based, while other software tools are designed to work with expert systems as hybrid development tools. There are also neural network shells available for commercial use.
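The start-simple-then-grow rule can be sketched as a loop over candidate hidden-layer sizes. Here `evaluate` is a hypothetical stand-in for training a candidate network and measuring its error on test data; the stopping criterion is my own choice:

```python
def select_architecture(evaluate, max_hidden=10, tolerance=1e-3):
    """Start with one small hidden layer and grow it, keeping the
    configuration with the lowest error; stop once extra nodes no
    longer give a meaningful improvement (they only add training cost)."""
    best_size, best_err = None, float("inf")
    for n_hidden in range(1, max_hidden + 1):
        err = evaluate(n_hidden)          # train and test a candidate network
        if err < best_err - tolerance:    # meaningful improvement: keep growing
            best_size, best_err = n_hidden, err
        else:                             # diminishing returns: stop here
            break
    return best_size, best_err

# Toy error curve: accuracy improves quickly, then extra nodes stop helping.
size, err = select_architecture(lambda n: 1.0 / n + 0.05 * n)
```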

The advantage of these shells is that programming experience is not required and programming time is reduced. For example, NeuroShell[10] is an ANN development environment based on back-propagation and is best suited for stand-alone systems. References [10] and [11] list some of the ANN development systems in use. ANNs can also be implemented directly as semiconductor circuits, or using electro-optical technology[4]. The following example is included for clarification; it involves computing the deflection of a cantilever beam subjected to a point load, as shown in the figure.



The input values were normalized to a common range. The output layer contains one single node, which represents the normalized deflection; the hidden layer contains 3 nodes. The normalized deflection was used as the desired output value. Sample training patterns, testing data, and a comparison of the theoretical and predicted deflections can be found in reference[12].
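Training data for such an example can be generated from the closed-form tip deflection of a cantilever under a point load, delta = P * L^3 / (3 * E * I). The parameter ranges below are illustrative only, not those used in reference[12]:

```python
def deflection(P, L, E, I):
    """Tip deflection of a cantilever beam under a point load P at its end."""
    return P * L**3 / (3 * E * I)

def normalize(v, lo, hi):
    """Scale a value into [0, 1] for use as a network input or target."""
    return (v - lo) / (hi - lo)

# Illustrative parameter ranges (not from the report): steel beam,
# small second moment of area, a grid of loads and spans.
E, I = 200e9, 1e-6
patterns = []
for P in (1e3, 2e3, 5e3):      # point load in N
    for L in (1.0, 2.0, 3.0):  # span in m
        d = deflection(P, L, E, I)
        patterns.append(([normalize(P, 1e3, 5e3), normalize(L, 1.0, 3.0)],
                         normalize(d, 0.0, deflection(5e3, 3.0, E, I))))
```

Each pattern pairs normalized (load, span) inputs with the normalized deflection as the desired output, mirroring the report's setup of one output node for the normalized deflection.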


(Figure: example application[10].)

Structural Engineering Applications

Research on the application of ANNs to civil engineering problems is growing rapidly. The use of ANNs in structural engineering has evolved as a new computing paradigm, though it is still very limited. It has been applied in several different areas; in the following sections, some prototype applications in structural engineering are briefly described.

They developed a multi-layer feedforward network model for the initial design of reinforced concrete rectangular single-span beams, with two hidden layers and 8 nodes in the input layer. One node each was selected for span, dead load, and live load. For the type of steel, one node each was assigned for the three grades of steel; similarly, for the two possible grades of concrete (one of them M-15), two nodes were provided.

The network has been tested to predict the probable values of different beam parameters for new examples. Abdallah and Stavroulakis[14] developed a back-propagation neural network model that uses experimental steel-connection data to estimate the mechanical behavior of semi-rigid steel connections. Two types of steel structure connections were considered. The design variables used for training and testing the neural network were taken as input data, and the measured moment-rotation curve was considered the output data for the neural network.

Chen et al. developed a model consisting of two parts: a Neural Emulator Network to represent the structure to be controlled, and a Neural Action Network to determine the control action on the structure. The neural emulator network is constructed with a four-node input layer, a four-node hidden layer, and a one-node output layer.

The input data to the network are the displacements on the roof and ground floor; the output of the network is the displacement on the roof at the next time step. StructNet[16] is a neural network model that selects the most effective structural member materials given a building project's attributes. The inputs for StructNet include construction project parameters, which cover information about the building design and the project constraints.

The output of the network is the percentage of times a particular type of beam, column, or slab was used in similar training projects. The neural network contains 15 input nodes, 1 hidden layer with 15 nodes, and 14 output nodes. Kang and Yoon[17] developed a two-layer neural network (single-layer perceptron) for truss design. The type of truss used in their model is a simple one, and the problem considered is the selection of economical member areas that satisfy the stress requirements.

The input data to the network are the horizontal and vertical loads at the truss joints; the output of the network is the design areas of all members of the truss. In another application, the input parameters include joist span length, joist spacing, and live load, and the output parameters are the steel bar joist type and size. In a constitutive-modeling application, the processing units in the input and output layers of the neural network represent stresses, strains, and their increments.

The neural network has six units in the input layer and two units in the output layer: the six input units are two stresses, two strains, and two stress increments, and the two output units are two strain increments.


Structural Damage Assessment

Elkordy et al. developed a network to detect damage and determine its class. The network was composed of eight input nodes, nine hidden nodes, and two output nodes, and it can diagnose damage based on simultaneous consideration of different data sets representing different signatures of the structures. Kirkegaard and Rytter[21] developed a Multilayer Perceptron (MLP) network for damage assessment of cracked straight steel beams based on vibration measurements.

The MLP network was trained using the back-propagation algorithm, with the 5 lowest bending natural frequencies used as training data. A four-layer neural network was used, with 5 input nodes, 7 nodes in each of the two hidden layers, and 2 output nodes.

The output nodes give the crack location and size. Stephens and VanLuchene[22] explored the use of ANNs for damage assessment in determining the safety condition of a structure following a seismic event. The input to their neural network consists of three types of damage indices, and the network contained a single middle layer with seven nodes. Begum et al. used the finite element method (FEM) to simulate defective components subjected to impact-echo testing; the FEM results are interpreted by a neural network that predicts the depth range of the crack in the concrete surface.

The input to the network is the amplitudes at equidistant sampling points on the amplitude-frequency spectrum; the output of the network is the overall probability of the defect interface occurring within a given depth range. Watson et al. developed a network that detects a pile shape directly from its spectrum.

The output from the network was the pile's fault position and length. Takahashi and Yoshioka[26] developed a neural network system for fault identification in a beam structure. The network used has 4 layers, with 5 input nodes, 5 and 6 nodes in the two hidden layers, and 2 output nodes; it was trained with numerical values of the relative changes of the lowest five natural frequencies of the beams.

Szewczyk and Hajela[27] developed a counter-propagation ANN for structural damage detection. The network was used to model the inverse mapping between a vector of the stiffnesses of individual structural elements and the vector of the global static displacements under a testing load. The network has one input layer, one hidden layer, and one output layer.

The input to the network is the static displacement vector, and the output is the Young's moduli.

Structural Analysis

Flood and Kartam[28, 29] introduced the concepts, theoretical limitations, and efficiency of ANNs with reference to a simple structural analysis example.