In this paper, we present a visual sensor network in which each node embeds computer vision logics for analyzing urban traffic in real time. The nodes in the network share their perceptions and build a global, comprehensive interpretation of the analyzed scenes in a cooperative and adaptive fashion.
This is possible thanks to a specially designed Internet of Things (IoT) compliant middleware which encompasses in-network event composition as well as full support of the Machine-to-Machine (M2M) communication mechanism. The potential of the proposed cooperative visual sensor network is shown with two sample applications in urban mobility, connected to the estimation of vehicular flows and to parking management. Besides providing detailed results for each key component of the proposed solution, the validity of the approach is demonstrated by extensive field tests, which proved the suitability of the system in providing a scalable, adaptable and extensible data collection layer for managing and understanding mobility in smart cities.
Keywords: visual sensor networks; real-time image processing; embedded vision; IoT middleware; internet of things; intelligent transportation systems; smart cities

1. Introduction

There is thus a strong interest in making our cities smarter by tackling the challenges connected to urbanization and high population density by leveraging modern Information and Communications Technologies (ICT) solutions.
Indeed, progress in communication, embedded systems and big data analysis makes it possible to conceive frameworks for the sustainable and efficient use of resources such as space, energy and mobility, which are necessarily limited in crowded urban environments. In particular, Intelligent Transportation Systems (ITS) are envisaged to play a major role in the smart cities of tomorrow. ITS can help in making the most profitable use of existing road networks, which cannot always be further expanded to a large extent, as well as of public and private transport, by optimizing scheduling and fostering multi-modal travelling.
Indeed, the cruising-for-parking issue is a complex problem which cannot be ascribed solely to parking shortage, but is also connected to fee policies for off-street and curb parking and to the mobility patterns through the city in different periods and days of the week.
Indeed, cheap curb parking fees usually foster cruising, with deep impacts on overall circulation, including longer times spent parking, longer travelled distances and wasted fuel, thus resulting in increased greenhouse gas emissions. As such, the problem cannot be statically modeled, since it clearly has a spatio-temporal component [2], whose description could classically be obtained only through detailed data acquisition assisted by manual observers, which, being expensive, cannot be routinely performed.
Nowadays, however, ITS solutions in combination with the pervasive sensing capabilities provided by Wireless Sensor Networks (WSN) can help in tackling the cruising-for-parking problem: indeed, by using WSN it is possible to build a neat spatio-temporal description of urban mobility that can be used for guiding drivers to free spaces and for proposing adaptive policies for parking access and pricing [3].
More generally, pervasive sensing and ubiquitous computing can be used to create a large-scale, platform-independent infrastructure providing real-time pertinent traffic information that can be transformed into usable knowledge for a more efficient city thanks to advanced data management and analytics [4].
A key aspect for the success of these modern platforms is access to high-quality, highly informative and reliable sensing technologies that, at the same time, should be sustainable for massive adoption in smart cities. Among the possible technologies, imaging and intelligent video analytics probably have the greatest potential, which has not yet been fully unveiled and is expected to grow [5].
Indeed, imaging sensors can capture detailed and disparate aspects of the city and, notably, traffic related information. Thanks to the adaptability of computer vision algorithms, the image data acquired by imaging sensors can be transformed into information-rich descriptions of objects and events taking place in the city. In the past years, the large scale use of imaging technologies was prevented by inherent scalability issues.
Indeed, video streams had to be transmitted to servers and processed there to automatically extract the relevant information. Nowadays, however, from an Internet of Things (IoT) perspective, it is possible to conceive embedded vision nodes with suitable on-board logics for video processing and understanding. Such recent ideas have been exploited in cooperative visual sensor networks, an active research field that extends the well-known sensor network domain by taking into account sensor nodes enabled with vision capabilities.
However, cooperation can be understood at different levels. For instance, in [6] road intersection monitoring is tackled using different sensors, positioned in such a way that any event of interest can always be observed by at least one sensor.
In [7], instead, cooperation among nodes is obtained by offloading computational tasks connected to image feature computation from one node to another.
With respect to these previous works, one of the main contributions of this paper is the definition and validation of a self-powered cooperative visual sensor network designed for acting as a pervasive roadside wireless monitoring network to be installed in the urban scenario to support the creation of effective Smart Cities.
Such an ad hoc sensor network was born in the framework of the Intelligent Cooperative Sensing for Improved traffic efficiency (ICSI) project [8], which aimed at providing a platform for the deployment of cooperative systems, based on vehicular network and cooperative visual sensor network communication technologies, with the goal of enabling safer and more efficient mobility in both urban and highway scenarios, fully in line with ETSI Cooperative ITS (C-ITS).
In this direction, and following the ICSI vision, the proposed cooperative visual sensor network is organized as an IoT-compliant wireless network in which images can be captured by embedded cameras to extract high-level information from the scene. The cooperative visual sensor network is responsible for collecting and aggregating ITS-related events to be used to feed higher levels of the system in charge of providing advanced services to the users.
More in detail, the network is composed of new custom low-cost visual sensor nodes collecting and extracting information on: (i) parking slot availability, and (ii) traffic flows. The first set of collected data, regarding parking, can be used in the Smart City domain to create advanced parking management systems, as well as to better tune the pricing policies of each parking space. The second set of data, related to vehicular flows, can be used for a per-hour analysis of the city congestion level, thus helping the design of innovative and adaptive traffic reduction strategies.
Extraction and collection of such ITS-related data is achieved thanks to the introduction of novel lightweight computer vision algorithms for flow monitoring and parking lot occupancy analysis, which represent another important contribution of this paper; indeed, the proposed methods are compared to reference algorithms available in the state of the art and are shown to have comparable performance, yet they can be executed on autonomous embedded sensors.
As a further point with respect to previous works, in our proposal cooperation is obtained by leveraging a Machine-to-Machine (M2M) middleware for resource-constrained visual sensor nodes. In our contribution, the middleware has been extended to compute aggregated visual sensor node events and to publish them using M2M transactions.
In this way, the belief of each single node is aggregated into a network belief which is less sensitive to partial occlusions and to the failure of individual nodes.
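As a minimal illustration of this idea, the per-slot occupancy probabilities reported by several nodes can be fused into a single network belief. The simple averaging rule and the `None` convention for failed or occluded views below are our own assumptions, not the exact composition rule used by the middleware:

```python
def aggregate_beliefs(node_beliefs):
    """Fuse per-slot occupancy probabilities from several nodes into a single
    network belief by simple averaging; a node reports None for a slot it
    cannot see (occlusion) or when it has failed."""
    fused = []
    for slot_values in zip(*node_beliefs):          # one tuple per parking slot
        valid = [v for v in slot_values if v is not None]
        fused.append(sum(valid) / len(valid) if valid else None)
    return fused
```

For instance, `aggregate_beliefs([[0.9, None], [0.7, 0.2]])` averages the first slot over the two reporting nodes and falls back to the only available estimate for the second, so one occluded or failed node does not invalidate the network belief.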
Even more importantly, the visual sensor network and the gain in accuracy obtainable thanks to the cooperative approach are not only proved through simulation or limited experimentation in the lab, but are shown in a real, full-size scenario. Indeed, extensive field tests showed that the proposed solution can actually be deployed in practice, allowing for an effective, minimally invasive, fast and easy-to-configure installation, whose maintenance is sustainable, the network nodes being autonomous and self-powered thanks to integrated energy harvesting modules.
In addition, during the tests, the visual sensor network was capable of gathering significant and precise data which can be exploited for supporting and implementing real-time adaptive policies as well as for reshaping city mobility plans. The paper is organized as follows. Related works are reviewed in Section 2, focusing both on architectural aspects and computer vision algorithms for the targeted ITS applications.
In Section 3 the main components used for building the cooperative visual sensor network are introduced and detailed, while Section 4 reports the findings of the experiments for the validation of the embedded computer vision logics.
Section 5 ends the paper with ideas for future research.

2. Related Works

With the increasing number of IoT devices and technologies, monitoring architectures have moved over the years from cloud-based approaches towards edge solutions, and more recently to fog approaches.
The main drivers of this progressive architectural change have been the capabilities and complexities of the IoT architectural elements. As the computational capacity of devices has increased, the intelligence of the system has moved from its core (cloud-based data processing) to the border (edge computing: data analysis and aggregation), pushing further until reaching the devices themselves (the fog computing approach), spreading the whole system intelligence among all architectural elements [9].
One of the main features of fog computing is location awareness. Data can be processed very close to their source, thus enabling better cooperation among nodes to enhance information reliability and understanding.
Visual sensor networks in the ITS domain have been envisioned over the years as an interesting solution to extract high-value data. The first presented solutions were based on an edge-computing approach in which whole images, or high-level extracted features, were processed by high-performance nodes located at the edge of the monitoring system; works such as [10,11] are examples of systems following this approach. More recent approaches leverage powerful visual sensor nodes, proposing solutions in which images are fully processed on board, thus exploiting fog-computing capabilities.
Following this approach, several works have been proposed in the literature pursuing a more implementation-oriented and experimental path; works such as [12,13] must be cited in this line of research and, more recently, a theoretical and modeling analysis [14].
Sensors, 17, 4 of 25

By following a fog computing approach, the solution described in this paper proposes (i) a visual sensor network in which the logic of the system is spread among the nodes (visual sensors with image processing capabilities), and where (ii) information reliability and understanding are enhanced by node cooperation (a cooperative middleware exploiting location-awareness properties). As discussed in the Introduction, the proposed visual sensor network is applied to two relevant ITS problems in urban mobility, namely smart parking and traffic flow monitoring.
Nowadays, besides counter-based sensors used in off-street parking, most smart parking solutions leverage two sensor categories, i.e., proximity sensors and camera-based sensors. The first category uses either proximity sensors based on ultrasound or inductive loops to identify the presence of a vehicle in a bay [15,16].
Although the performance and the reliability of the data provided by this kind of sensor are satisfactory, the installation and maintenance costs of the infrastructure have prevented massive uptake of the technology, which has been mainly used for parking guidance systems in off-street scenarios.
Camera-based sensors are based on the processing of video streams captured by imaging sensors through the use of computer vision methods.
In [17], two possible strategies to tackle the problem are identified, namely the car-driven and the space-driven approaches. In car-driven approaches, object detection methods, such as [18], are employed to detect cars in the observed images, while in space-driven approaches the aim is to assess the occupancy status of a set of predefined parking spaces imaged by the sensor.
Change detection is often based on background subtraction [19]. For outdoor applications, the background cannot be static: it should be modeled dynamically to cope with issues such as illumination changes, shadows and weather conditions. Other approaches are based on machine learning, in which feature extraction is followed by a classifier assessing the occupancy status. For instance, in [22], Gabor filters are used for extracting features; then, a training dataset containing images with different light conditions is used to achieve a more robust classification.
In [25], which can be considered the state of the art, Support Vector Machines (SVM) [26] are used to classify the parking space status on the basis of LBP and LPQ features, and an extensive performance analysis is reported. Deep learning methods have also recently been applied to the parking monitoring problem [27].
All the previously described methods are based on the installation of a fixed camera infrastructure. Following the trends in connected and intelligent vehicles, however, it is possible to envisage novel solutions to the parking lot monitoring problem. For instance, in [28], an approach based on edge computing is proposed in which each vehicle is endowed with a camera sensor capable of detecting cars in its field of view by using a cascade of classifiers. Detections are then corrected for perspective skew and, finally, parked cars are identified locally by each vehicle.
Single perceptions are then shared through the network in order to build a precise global map of free and busy parking spaces. A current drawback is that the method can provide significant results and a satisfactory coverage of the city only if it is adopted by a sufficient number of vehicles. Similarly, several methods based on crowd-sourcing [29] have been reported in the literature, most of which rely on location services provided by smart-phones for detecting arrivals and departures of drivers by leveraging activity recognition algorithms [30,31].
As for traffic flow monitoring, the problem has received great attention from the computer vision community [32], even for the specific case of the urban scenario [33]. Nevertheless, most of the existing methods use classical computer vision pipelines based on background subtraction followed by tracking either of the identified blobs or of the detected vehicles.
Such approaches are too demanding in terms of computational resources for deployment on embedded sensors. Among the various approaches and algorithms used, only a few target real-time on-site processing, among them the system by Messelodi et al. A similar approach is reported in [36], where the tracking is based on Optical-Flow-Field estimation following an automatic initialization.
Another feature-based algorithm for the detection of vehicles at intersections is presented in [37], but again the hardware used is not reported, and the tests seem to be performed only on lab machines. An interesting embedded real-time system for detecting and tracking moving vehicles in nighttime traffic scenes is shown in [38].
In this case, a DSP-based embedded platform with 32 MB of DRAM is used; the reported results on nighttime detection are very good, yet the approach did not have to cope with low-energy constraints. The system by Lai et al. suffers from similar limitations: the hardware used is not described and the final acceptable processing frame rate is around 2 fps. The same problem arises in [41], where real-time vision is performed with so-called autonomous tracking units, defined as powerful processing units.
Their test-beds are parking areas, in particular airport parking lots with slow-moving vehicles, and their final processing rate is again very low.

3. System Architecture and Components

In this section, the system architecture is reported before presenting the prototyped visual sensor node and the two key components that enable the creation of a cooperative visual sensor network: namely, the computer vision logics, especially designed for deployment on embedded sensors, and the IoT middleware.
The vision logics are meant to be deployed on board each single sensor in the visual sensor network, which is then able to provide autonomously its interpretation of the scene at sensor level.
These local processing results are then integrated and converted into a more advanced, robust and fault tolerant understanding of the scene at network level [43], leveraging a middleware layer capable of event composition over resource constrained sensor networks. Both the computer vision logics and the middleware solution have been used in the design of a visual sensor network, that has been physically deployed and validated on the smart camera prototype described in Section 3.
System Architecture and Visual Sensor Prototype

The high-level system architecture of the deployed monitoring sensor network for urban mobility is reported in Figure 1. It is mainly composed of three components: (i) the visual sensor devices, able to exploit on-board processing capabilities of the scene while providing M2M communication, (ii) the system gateway, acting as the connection point between devices belonging to the visual sensor network and the Internet world, and (iii) the remote server, in which detected events and data are stored for both analytic purposes and visualization on web portals.
The remainder of this section focuses on the monitoring part of the system by reporting the motivation and design choices behind the realization of the visual sensor node. Many nodes that might support the creation of a visual sensor network are currently available.
Actually, M2M communication is seen to be a key aspect to drive the shift from classical cameras and centralized video processing architecture to smart cameras and distributed video analytics over heterogeneous sensor networks. For these reasons, the design of an integrated IoT node was addressed, taking into account several requirements both from the functional and non-functional perspective.
Indeed, the node should have enough computational power to accomplish the computer vision tasks envisaged for urban scenarios as described in Section 3. In this way, the nodes might be used to set up an autonomous, self-powered network in the city, using whenever possible photo-voltaic panels or other energy harvesting opportunities.
Low-cost components and architecture, in addition, guarantee that, once engineered, the node can be manufactured at low cost in large quantities, which is a fundamental aspect for the sustainability and wide-scale adoption of the proposed technology.
As for network communication, the node should be ready to support the interfaces needed in the IoT domain and, in particular, the middleware described in Section 3.

Figure 1. System architecture.

Inside the node, two main logical components have been identified, corresponding to the networking and vision aspects. In particular, the networking component takes care of communication, both by managing M2M transactions and by interacting with the vision processes. Thus, the networking component needs to be operating most of the time to guarantee the responsiveness of the node to incoming requests, and it must be low-consuming, but no high computational resources are needed.
Conversely, the vision component consumes significant computational and energy resources to run the algorithms reported in Section 3. It is worth noticing that the visual sensor network is intrinsically privacy-preserving. Indeed, images are processed locally at the sensor level, without the need to transfer them to other locations, and are then discarded. Therefore, the confidential information contained in the acquired images is not at risk, since images are neither stored nor accessible from a centralized location.
Although it was not a primary scope of this paper (and, thus, it has not been taken into account in the implementation), a further security level on the communications inside the visual sensor network can be added; for instance, security concerns might be addressed using the methods proposed in [46], where an elliptic curve cryptography approach for IoT devices is adopted and applied to the smart parking domain.
This architecture has the advantage of integrating a PMU (Power Management Unit), in addition to numerous peripheral interfaces, thus minimizing the complexity of the board.
Thus, the board can also be produced in small quantities. The chosen architecture proved to have an average consumption, measured at the highest clock speed, low enough to make energy harvesting strategies viable. A microSD slot is present, which is essential for booting the system (kernel and associated EXT4 file-system); the board can be upgraded simply by changing the contents of the microSD.
The selection of a low-cost device led to a cheap, readily available camera, the HP HD Webcam, which has been used during the experimentation.

Embedded Vision Logics for Visual Sensor Networks

In the proposed cooperative visual sensor network, each node consists of an embedded sensor equipped with vision logics able to perform real-time scene understanding. The goal of this analysis is two-fold: (i) exact detection of parking slot availability in a parking lot, and (ii) traffic flow analysis on roads relevant for parking.
The major issue described here is the balance between the need to take into account low-cost, scalability and portability requirements, and the production of reliable and efficient computer vision technologies deployable on an IoT smart object. Among the various scenarios, the differences in specific requirements are substantial: real-time constraints for traffic flow versus so-called near real-time processing for parking analysis.
In the following, the two specific scenarios and the solutions implemented for them are analysed separately.

Parking Lot Availability Scenario

As discussed in Section 2, the car-driven and the space-driven approaches are the two main strategies to deal with the problem of detecting parking lot vacancies. In car-driven approaches, feature detection methods are employed to detect cars in the observed images, while in space-driven approaches the aim is to detect empty spaces rather than vehicles.
Recent works proposed in the literature show very good performance (see Table 2 in the next section). As mentioned above, this scenario allows for less restrictive processing constraints.
Various algorithms have been studied and designed to be deployed on the proposed visual sensor network for the analysis of parking lot occupancy status.
The chosen methodology is based alternatively on the analysis of single frames, or on frame differencing, in order to highlight the changes in the Regions of Interest (RoI) with respect to an adaptive GMM background reference image [48]. In order to improve the robustness of the algorithm with respect to environmental light changes, normalized versions of the images are computed and used, with respect to the global illumination parameters (average and variance) of both the current and the reference image.
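A sketch of this kind of normalization follows, assuming grayscale frames stored as nested lists. The paper does not give the exact formula, so matching the reference image's global mean and standard deviation is used here as a plausible stand-in:

```python
def normalize_illumination(img, ref_mean, ref_std, eps=1e-6):
    """Rescale a grayscale frame so that its global mean and standard
    deviation match those of the reference image, mitigating global
    illumination changes. `eps` guards against flat (zero-variance) frames."""
    n = len(img) * len(img[0])
    mean = sum(map(sum, img)) / n
    var = sum((p - mean) ** 2 for row in img for p in row) / n
    std = var ** 0.5
    return [[(p - mean) / (std + eps) * ref_std + ref_mean for p in row]
            for row in img]
```

After this step, frame differencing against the reference image compares photometrically comparable pixel values rather than raw intensities.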
To improve computational efficiency, image analysis and frame differencing for detecting changes are performed only on predetermined RoI in the acquired frame.
Each RoI corresponds to a specific parking slot; it is set up manually with the help of a graphic tool and can be of any polygonal shape (this helps to avoid occlusions like trees, poles or other static artifacts).
In Figure 2A the green zones are the defined parking lot RoIs. For each of the regions a confidence value of the occupancy is computed in real time. The output of a single node is a vector of occupancy probability values, one for each parking slot.
In the first phase, the input image is evaluated with two lightweight algorithms: a car-driven approach and a space-driven one. The car-driven method searches for car features in the predefined RoI.
A fast edge detector, the Canny operator [49], is used to obtain a very crisp image of the vehicle contours in the frame acquired at time t, as shown in Figure 2B; we guess whether a vehicle occupies the RoI Rk by calculating the index ek(t), which is proportional to the ratio of the number of edge pixels to the square root of the total number of pixels in Rk.
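In a minimal pure-Python sketch, assuming the Canny output is already available as a binary edge mask, the index can be computed as:

```python
import math

def edge_index(edge_mask, roi_mask):
    """e_k(t): number of Canny edge pixels falling inside RoI R_k, divided by
    the square root of the RoI's total pixel count (the proportionality
    constant is omitted, as in the text). Both masks are 0/1 matrices."""
    total = sum(sum(row) for row in roi_mask)       # pixels belonging to the RoI
    edges = sum(e and r
                for erow, rrow in zip(edge_mask, roi_mask)
                for e, r in zip(erow, rrow))        # edge pixels inside the RoI
    return edges / math.sqrt(total) if total else 0.0
```

A parked car produces many contour pixels inside its slot, so a high `edge_index` suggests occupancy, while an empty asphalt patch yields a value close to zero.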
This index cannot be used if a pole or a tree partially occludes the slot, since they can be misinterpreted as a presence in the lot.

Figure 2. (A) RoIs for a set of parking lots are set up manually with the help of a graphic tool; small rectangles on the driveway define the samples for asphalt detection. (B) The output of the Canny edge detector. (C) White regions represent areas where asphalt is detected. (D) Current input image with augmented reality displaying the status. (E) Background image. (F) Frame differencing to detect major status changes.
For every input frame, the asphalt detection acts like a binary filter (see Figure 2C). The index ak(t) is the ratio of asphalt pixels to the total pixels of a RoI. Non-gray areas such as windows, tires and plates never score like plain asphalt. After the first k frames (k being a number between 10 and 30), the background image is available (see Figure 2E), and from that moment all major events are determined by frame differencing.
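The two space-driven cues can be sketched as follows; the gray/luminance thresholds for asphalt and the differencing threshold are illustrative values, not the paper's calibrated ones:

```python
def asphalt_ratio(pixels_rgb, gray_tol=18, lum_range=(40, 140)):
    """a_k(t): fraction of RoI pixels classified as plain asphalt. A pixel is
    taken as asphalt when its channels are nearly equal (gray) and its
    luminance sits in a mid-dark band; windows, tires and plates fail this."""
    def is_asphalt(p):
        r, g, b = p
        return (max(r, g, b) - min(r, g, b) <= gray_tol
                and lum_range[0] <= (r + g + b) // 3 <= lum_range[1])
    return sum(map(is_asphalt, pixels_rgb)) / len(pixels_rgb) if pixels_rgb else 0.0

def changed_fraction(background, frame, thresh=30):
    """Frame differencing on a grayscale RoI: fraction of pixels deviating
    from the background image by more than thresh."""
    pairs = [(b, f) for brow, frow in zip(background, frame)
             for b, f in zip(brow, frow)]
    return sum(abs(f - b) > thresh for b, f in pairs) / len(pairs)
```

A slot with a high asphalt ratio is almost certainly free, while a large changed fraction against the learned background flags a major status change to be tracked over time.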
This is a very reliable detector, even with our low-resolution camera. If a major change happens in a RoI, it is very easily detected, and the system keeps track of the history of these changes.

Traffic Flow Monitoring Scenario

Contrary to the previous scenario, restrictive processing constraints exist here, due to the need not only to detect all the passing vehicles, but also to support a deeper analysis of the traffic flow.
With respect to the classical computer vision techniques reviewed in Section 2, an ad hoc lightweight method was designed which is suitable for deployment on embedded devices. For the deployment on the visual sensor network, background detection is performed only on small quadrangular RoIs, which are sufficient for modelling physical rectangles under a perspective skew.
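A lightweight per-RoI background model can be sketched as follows; the paper relies on an adaptive GMM reference [48], while this illustration substitutes a simpler exponential running average to convey the idea:

```python
def update_background(bg, frame, alpha=0.05):
    """Exponential running-average background model over a small RoI;
    alpha controls the adaptation speed (value illustrative)."""
    return [[(1 - alpha) * b + alpha * f for b, f in zip(brow, frow)]
            for brow, frow in zip(bg, frame)]

def foreground_mask(bg, frame, thresh=25):
    """Classify each RoI pixel as foreground (a moving object) when it
    deviates from the background model by more than thresh."""
    return [[int(abs(f - b) > thresh) for b, f in zip(brow, frow)]
            for brow, frow in zip(bg, frame)]
```

Restricting both functions to a few small quadrangular RoIs keeps the per-frame cost low enough for an embedded node, at the price of observing only the chosen road patches.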
In addition, lightweight methods are implemented for background modelling, that is, for determining whether a pixel belongs to the foreground (i.e., a moving object) or to the background.

Step 4: The distributed Bellman-Ford shortest-path algorithm is applied using the calculated cooperation-based link costs.
Then, we discuss the impact of cooperation on the routing in specific regular wireless networks.

Proposed Routing Algorithms

First, we propose a cooperation-based routing algorithm, the Minimum-Power Cooperative Routing (MPCR) algorithm, which takes the cooperative communications into consideration while constructing the minimum-power route. Here, Pi,j denotes the minimum possible transmission power from node i to node j, and Dj represents the latest estimate of the shortest path from node j to the destination, j ranging over the neighboring nodes of node i. For each three consecutive nodes, either the cooperative transmission mode or the direct transmission mode is implemented. In the cooperative transmission mode, the first, second, and third nodes behave as the sender, relay, and receiver, respectively. In the direct transmission mode, the first node is the sender and the third node is the destination. The transmission mode that requires less power is chosen.
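The selection between the two modes, combined with Bellman-Ford relaxation, can be sketched as follows. This is a centralized toy version: the power formulas `direct_power` and `coop_power` are placeholders for the paper's derived expressions, not the expressions themselves:

```python
def coop_link_cost(i, j, nodes, direct_power, coop_power):
    """Cooperation-based link cost: the cheaper of direct transmission and the
    best single-relay cooperative transmission from i to j."""
    best = direct_power(i, j)
    for r in nodes:                       # try every other node as the relay
        if r != i and r != j:
            best = min(best, coop_power(i, r, j))
    return best

def bellman_ford(nodes, dest, link_cost):
    """Centralized sketch of the distributed Bellman-Ford update: each node's
    cost is the minimum over next hops of the link cost plus that hop's
    current estimate of the cost to the destination."""
    cost = {n: float('inf') for n in nodes}
    cost[dest] = 0.0
    for _ in range(len(nodes) - 1):       # at most N-1 relaxation rounds
        for i in nodes:
            for j in nodes:
                if i != j:
                    cost[i] = min(cost[i], link_cost(i, j) + cost[j])
    return cost
```

In the distributed version each node performs only its own relaxation step, exchanging cost estimates with its neighbors instead of iterating over the whole network.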
In the conventional Bellman-Ford shortest-path algorithm, each node keeps the latest estimate Dj of the shortest path from node j to the destination [18]. The MPCR algorithm can be distributively implemented in the same way: it runs at each node, and each node updates its cost toward the destination using the proposed cooperation-based link cost formula. Table I describes the MPCR algorithm in detail. The required transmission power between two nodes is the minimum power obtained by searching over all the possible relay nodes; if there is no available relay, the direct transmission mode is used.

Second, we propose a cooperation-based routing algorithm, namely the Cooperation Along the Shortest Non-Cooperative Path (CASNCP) algorithm. It is similar to the heuristic algorithms proposed by Khandani et al.; however, it is implemented in a different way, using the proposed cooperation-based link cost formula. First, it chooses the shortest-path route; then, the algorithm applies the cooperative transmission mode on each three consecutive nodes in the SNCP route, with sender x, relay y, and receiver z. The transmission occurs in two phases: first, node x transmits its data directly to node y utilizing the direct transmission mode; second, if node z does not decode the data correctly, node y retransmits the data, provided it has correctly decoded it. Thus, the total transmission power to transmit the data from node x to node z follows from these two phases. More relays could be added; however, this can cause a significant increase in communication and computation burdens, while the performance increase might be small. In other words, adding more relays might not be cost effective, and the proposed scheme is optimal in the sense of the up-to-one-relay case.

In the following, we calculate the average required transmission power by each algorithm in a linear network.
The probability of choosing a certain node is 1/N. Considering one direction only, if l is even, there exist l/2 cooperative transmission blocks, and each block requires a total power of PtotC(2d0, d0, d0); if l is odd, the direct transmission mode is used over the last hop.

Fig.: Route chosen by the three routing algorithms in a grid wireless network.
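Under the stated even/odd decomposition, the end-to-end power of a linear route with l hops can be sketched as follows; the per-block and per-hop powers are taken as given inputs rather than computed from the paper's formulas:

```python
def casncp_route_power(l, coop_block_power, direct_hop_power):
    """End-to-end power on a linear network with l hops: cooperation is
    applied over consecutive node triplets, so an even l yields l/2
    cooperative blocks, while an odd l leaves one final direct hop."""
    blocks, leftover = divmod(l, 2)       # number of 2-hop cooperative blocks
    return blocks * coop_block_power + leftover * direct_hop_power
```

Whenever a cooperative block costs less than two direct hops, the even-hop decomposition saves power over pure hop-by-hop direct transmission.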
The average end-to-end transmission power for any routing scheme can be calculated by substituting the corresponding power formulas, which are (31), (32), and (33) for the MPCR, CASNCP, and SNCP algorithms, respectively.

Performance Analysis: Regular Grid Networks

The probability of choosing a certain node to be the source or the destination is uniform over the nodes. As described before, let i and j denote the number of hops between the source and the destination in the horizontal and vertical directions; this determines the relative source-destination pairs in one direction, excluding combinations in which the source and the destination are the same. In the following, we consider only the lower triangular part of these pairs. In this example, we can visually notice that the MPCR algorithm requires the least transmission power for any particular number of hops. As shown, the MPCR algorithm requires the least transmission power for any source-destination pair. This result is very similar to the one in (30), considering the nodes in two dimensions instead of one dimension only; a similar computation applies to the third component in (36).

Fig.: Required transmission power per route versus the number of hops in a regular (a) linear network, (b) grid network.
Given a certain network topology, we randomly choose a source-destination pair and apply the MPCR, CASNCP, and SNCP routing algorithms. For each algorithm, we calculate the required transmission power per route, as in (34), versus the network size for the network setup defined above.
Fig.: Required transmission power per route versus the network size N.

Finally, these quantities are averaged over different network topologies. First, we illustrate the effect of varying the desired throughput on the required transmission power per route. The CASNCP algorithm, by construction, is limited to applying the cooperative transmission mode along the shortest non-cooperative path. The required transmission power by any routing algorithm decreases with the number of nodes: intuitively, the higher the number of nodes in a fixed area, the closer the nodes are to each other, and the lower the required transmission power between these nodes, which results in lower required end-to-end transmission power. Moreover, the MPCR algorithm requires the least transmission power among the routing algorithms. One of the major results of this paper is that the MPCR algorithm requires less transmission power than the CASNCP algorithm.
Intuitively, this result holds because the MPCR algorithm takes the cooperative transmission into account while constructing the route, which results in lower required end-to-end transmission power. In addition, we have considered regular linear and grid networks, and we have derived analytical results for the power savings due to cooperation in these cases.
As shown, the routes constructed by the different algorithms differ, and the average number of hops increases with N.

The proposed protocol utilizes the merits of both direct and cooperative transmission to achieve reliable and quick data delivery and a greater network stability period. Incremental relay-based cooperation is utilized to improve the energy efficiency of the network.
At the relays, the Detect-and-Forward (DF) technique is used, whereas the selection combining technique is utilized at the sink. Cooperative routing is a promising technique which exploits the broadcast nature of the wireless medium to enhance network performance. Sensor nodes simultaneously transmit their data on different links and utilize cooperation between nodes.
The protocol utilizes the merits of both direct and cooperative transmission to achieve a higher stability period and end-to-end throughput with a greater network lifetime. In such networks, designing an energy-efficient routing topology is a main concern.
Simulation results of CEMob are compared with those of contemporary routing protocols. The comparison shows that CEMob has reduced energy consumption and is more energy efficient than the compared protocols. Energy conservation is one of the most important factors for reliability in Wireless Sensor Networks (WSNs), since nodes have limited energy resources.
There is a need to design such routing protocols, which efficiently use available