Firefly Clock Synchronization in an 802.15.4 Wireless Network

Abstract

This paper describes the design and implementation of a distributed, self-stabilizing clock synchronization algorithm based on the biological example of Asian fireflies. Huge swarms of these fireflies use the principle of pulse-coupled oscillators to synchronously emit light flashes in order to attract mating partners. When this algorithm is applied to real sensor networks, nodes typically cannot receive messages while transmitting, which prevents the networked nodes from reaching synchronization. To counteract this deafness problem, we adopt a variant of the Reachback Firefly Algorithm that distributes the timing of light flashes within a given time window without affecting the quality of the synchronization. A case study implemented on IEEE 802.15.4 ZigBee nodes demonstrates the application of this approach to time-triggered communication scheduling and coordinated duty cycling in order to extend the battery lifetime of the nodes.

1. Introduction

In South-East Asia, huge swarms of fireflies synchronously emit light flashes to attract mating partners [1]. This paper describes the adaptation of the underlying biological principle for robust, self-stabilizing distributed synchronization in wireless sensor networks.

An ensemble of nodes is synchronized in order to execute a collision-free communication schedule following a time-triggered paradigm [2]. The basic element of a time-triggered system is a global timebase that is distributed among the nodes through clock synchronization. In order to provide a common timebase, we propose the application of the Reachback Firefly Algorithm (RFA), a firefly-inspired algorithm that works despite the limitations of current radio controllers, which are deaf to incoming transmissions while in sending mode. This deafness problem is mitigated by distributing the timing of light flashes within a given time window. Using the global timebase, communication activities are scheduled according to a predefined, periodic scheme. This simple but robust scheme enables the design of dependable distributed systems and simplifies system verification and diagnosis. Furthermore, the global synchronicity is used to enable synchronized sleep schedules in a wireless network cluster, which can save a considerable amount of energy at each node. This is especially useful in situations with low duty-cycles, for example, a sensor network that is utilizing only a fraction of its available bandwidth. Due to the a priori known message schedule, the synchronized nodes are able to predict the timing of incoming messages and can turn off their receivers when no transmissions of interest are scheduled. Since listening on the channel is a significant energy consumer of a typical wireless sensor node, the overall power consumption can thus be reduced in favor of battery lifetime. The global time can also support the application in tasks such as timestamping, synchronous measurements, and timely coordinated distributed actions.

As a proof of concept, the algorithm has been evaluated by simulation and in a case study consisting of a network of battery-powered low-cost nodes based on an off-the-shelf IEEE 802.15.4 MAC layer. The evaluation results in this paper give realistic figures for the precision of the clock synchronization and the achievable savings in power consumption.

The rest of the paper is structured as follows. Section 2 describes the basic features and operation of the RFA. Section 3 presents the design of our approach consisting of clock synchronization, a modified RFA and an energy saving scheme. Sections 4 and 5 describe the evaluation of a case study implementation by simulation and on real hardware. Results are discussed in Section 6. Related work is treated in Section 7. The paper is concluded in Section 8.

2. Reachback Firefly Algorithm

The RFA was introduced in [3] and supports scalability, graceful degradation, and simple calculations. The algorithm can be classified as a self-stabilizing, distributed, push-based clock synchronization algorithm. Its advantage is that it naturally provides self-stabilization, that is, from any initial configuration the clocks eventually become synchronized. The concept is based on the Pulse-Coupled Biological Oscillators (PCO) phase advance synchronization model [4], but is more appropriate for practical implementation in wireless networks. In particular, the following assumptions of the original PCO model make a practical application very difficult: the oscillators have identical dynamics; nodes can fire instantaneously; every firing event is observed immediately; and all computations are performed perfectly and instantaneously.

To understand the principle behind the main concept of the PCO model, consider the following simple example. Assume two persons want to synchronize their wrist watches but can only inform each other when their own watch indicates twelve o'clock. Every time a person is notified, it advances its own watch by a multiplicative factor, up to at most twelve o'clock. The higher the multiplication factor, the faster the clocks converge, but the less robust the system becomes to faulty notifications. This algorithm describes the simplified phase advance synchronization model of the fireflies, which is described in more detail below. Starting from the given initial configuration, Table 1 shows that after 5 periods the clocks are synchronized.

Table 1 A demonstration of the PCO model. The columns correspond to the ongoing time sequence.
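To make the example concrete, the following minimal C sketch simulates two such clocks under the multiplicative phase-advance rule. The period, the advance factor, and the initial readings are assumed illustration values, not the entries of Table 1.

```c
/* Minimal sketch of the two-clock phase-advance example (assumed values). */
#include <stdio.h>

#define PERIOD 12.0   /* "twelve o'clock"                             */
#define FACTOR 1.2    /* multiplicative phase advance (assumed value) */

int main(void) {
    double clk[2] = { 0.0, 7.0 };          /* assumed initial readings */
    for (int flash = 0; flash < 20; ++flash) {
        /* Let both clocks run until the one closer to twelve fires. */
        int firer = (clk[0] >= clk[1]) ? 0 : 1;
        double dt = PERIOD - clk[firer];
        clk[0] += dt;
        clk[1] += dt;
        clk[firer] = 0.0;                  /* the firing clock wraps around  */
        int other = 1 - firer;
        clk[other] *= FACTOR;              /* the listener jumps ahead ...   */
        if (clk[other] >= PERIOD)
            clk[other] = 0.0;              /* ... and flashes as well: sync  */
        printf("flash %2d: clk0 = %5.2f  clk1 = %5.2f\n", flash, clk[0], clk[1]);
    }
    return 0;
}
```

Running the sketch shows the same qualitative behavior as Table 1: the two readings drift apart from the unstable fixed point until one jump hits the cap, after which both clocks flash together.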

However, once all clocks are synchronized, they will indicate the clock event at the same time. On a broadcast communication medium, this causes message collisions and a "deafness" problem in many wireless systems, since standard wireless transceivers cannot receive messages while being in transmission mode.

The problem can be bypassed by sending the synchronization messages with a random offset while including the particular offset in the message. The receiver can then reconstruct the intended synchronization instant and perform a clock adjustment with respect to the received offset values. Obviously, this random offset results in an out-of-order reception of synchronization messages, which causes a problem for the simple synchronization approach. A solution to this problem is to gather all synchronization events until the period end is reached and then react to the received time information from the last period. This idea was introduced in [3] and is called reachback response. However, a reachback response variant of the simple synchronization approach described above equals the RFA described below with a coupling factor that violates the upper bound of Lemma 3.7 and is therefore unfeasible for clock synchronization.

The formal description is based on a phase variable that (i) increases from zero up to the cycle period and (ii) is reset to zero at the beginning of a cycle. A state variable corresponds to the charge of a firefly. The authors of the PCO model have proven that the state function must be a smooth, monotonically increasing, and concave down function in order to achieve synchronicity. In [4], Mirollo and Strogatz state a general state function as shown in (1), where the form of the curve depends on a parameter named the dissipation factor, which measures the extent to which the function is concave down. Figure 1 visualizes the state function for different dissipation factors,

(1)

The coupling between the oscillators is defined by the firing function, which depends on the state function, its inverse, and the pulse strength:

(2)

The firing function is evaluated immediately after an oscillator receives a firing event (or flash, in the case of a firefly). We further use the term phase advance for the resulting increase in the phase domain. Due to the concave down state function, a constant addition in the state domain results in a variable increase in the phase domain: a phase advance at the beginning of a cycle is smaller than one later in the cycle.

Figure 1: The state function for different dissipation factors.
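For reference, a plausible reconstruction of (1) and (2): assuming the standard Mirollo-Strogatz form that matches the description above, with the phase φ normalized to [0,1], the dissipation factor b, and the pulse strength ε, the state function, its inverse, and the firing function presumably read as follows.

```latex
% Assumed reconstruction based on the standard Mirollo--Strogatz model [4];
% the paper's exact notation may differ.
f(\varphi) \;=\; \tfrac{1}{b}\,\ln\!\bigl(1 + (e^{b}-1)\,\varphi\bigr),
\qquad
f^{-1}(x) \;=\; \frac{e^{bx}-1}{e^{b}-1}

h(\varphi) \;=\; \min\!\Bigl(1,\; f^{-1}\!\bigl(f(\varphi) + \varepsilon\bigr)\Bigr),
\qquad
\Delta\varphi \;=\; h(\varphi) - \varphi
```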

To address the restrictive assumptions of the PCO model in wireless networks, the RFA additionally uses the notions of a reachback response and pre-emptive message staggering. Pre-emptive message staggering means that a node broadcasts its synchronization message with some random time offset before it reaches the period end and is thus able to gather the time information of all other nodes during a period with a lower probability of message collisions.

In the original PCO model, an oscillator immediately reacts to each firing event. In contrast, the reachback response records the timestamps of all received firing events and calculates an overall phase jump once at the end of each period which is then applied at the beginning of the next cycle. Thus, if a node reaches the period end, it "reaches back in time" and reacts to the firing events of the past period. This principle is visualized in Figure 2.

Figure 2: Comparison of (a) the original PCO model and (b) the RFA. In the PCO model, an oscillator immediately reacts to a firing event; in contrast, the RFA applies the overall phase jump at the beginning of the next cycle.

A further problem in the PCO model occurs in the case of an already synchronized network comprising several nodes. In that case, all nodes will trigger the transmission event for the synchronization message at the same time. As a result, the messages collide and the collision avoidance mechanism of the CSMA/CA scheme takes effect. The resulting delay jitter could be avoided by using MAC timestamping. However, the backoff scheme of the IEEE 802.15.4 standard [5] allows a message to be backed off only a bounded number of times, which results in a maximum backoff time of at most 36.48 milliseconds (at 2.4 GHz). Since the serialization delay of a full message is at most 4.256 milliseconds (133 bytes at 250 kbps), there can only be a limited number of active wireless nodes without losing messages, even in the best case. Therefore, a bigger network comprising more nodes in the same broadcast domain requires an additional message staggering delay at an upper layer. A second reason for the additional message staggering is that the original IEEE 802.15.4 standard does not provide a MAC timestamping mechanism and thus does not allow the delay jitter due to the backoff scheme to be reduced. The only remaining way to reduce the delay jitter is then to modify the default values of some MAC-specific attributes in order to switch off the backoff mechanism. To avoid the resulting higher probability of transmission failures, the pre-emptive message staggering explicitly adds a timestamped random transmission delay to the firing messages at the application layer.
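The following C sketch illustrates the staggering idea: the sender fires a random amount of time before its period end and puts that offset into the message, and the receiver adds the offset back so that all nodes effectively react to the same nominal firing instant. All names and the delay bounds are assumptions, not the paper's message format.

```c
/* Sketch of pre-emptive message staggering (assumed names and constants). */
#include <stdint.h>
#include <stdlib.h>

#define TICKS_PER_PERIOD 31250u   /* virtual-clock ticks per period (testbed value) */
#define STAGGER_MIN        500u   /* assumed minimum staggering delay in ticks      */
#define STAGGER_MAX       1500u   /* assumed maximum staggering delay in ticks      */

typedef struct {
    uint16_t stagger_offset;      /* number of ticks the sender fired early */
} fire_msg_t;

/* Sender side: how many ticks before the period end the firing message is sent. */
uint16_t pick_stagger_offset(void)
{
    return (uint16_t)(STAGGER_MIN + rand() % (STAGGER_MAX - STAGGER_MIN + 1));
}

/* Receiver side: phase_at_rx is the local phase when the message arrived; the
 * returned value is the local phase corresponding to the sender's nominal
 * period end, i.e., the instant to react to in the reachback step. */
uint32_t reconstruct_fire_phase(uint32_t phase_at_rx, const fire_msg_t *m)
{
    return (phase_at_rx + m->stagger_offset) % TICKS_PER_PERIOD;
}
```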

3. Applying RFA to Wireless Sensor Networks

Many protocols used in sensor networks aim principally at reducing the consumed power through synchronized sleep schedules. Such an approach is also referred to as a low duty-cycle concept, where the transceiver module of all nodes is periodically activated only for a short time, with a period length ranging from seconds up to hours. Our concept performs duty-cycling more effectively by utilizing a time-triggered approach in which a node takes advantage of the a priori known transmission events. These events are globally coordinated by the use of rounds stored in a file called the Round Description List (RODL) file. In the current implementation, such a round corresponds to a complete cycle of our synchronization algorithm. A round is further divided into a number of slots. Every node in a network has its own RODL file, which statically assigns a communication activity to each slot in each round (a minimal sketch is shown below). This allows the setup of collision-free communication and further improves the energy consumption by switching off the transceiver when it is not required. Figure 3 shows the time diagram of the time-triggered approach for a single node. Therein, a period is subdivided into several slots, where each slot corresponds to either a receiving slot, a sending slot, an execution slot, or an idle slot. Concerning energy awareness, the most important slots are the receiving slots, since they determine how much energy is spent on listening and receiving. In the diagram, the first and the second slot are assigned to be receiving slots. Note that the active time of the receiver unit differs between these slots; this comes from the automatic deactivation after the receiver has recognized the end of a transmission. The synchronization window parameter guarantees that the receiver module is enabled some time before any transmission takes place.

Figure 3: The principle of the time-triggered approach.
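As a sketch of the RODL idea, the following C fragment shows one statically assigned slot table and a per-slot dispatcher. Slot actions, names, and the schedule itself are illustrative assumptions, not the TTP/A data structures.

```c
/* Minimal sketch of a Round Description List (RODL): one entry per slot. */
#include <stdint.h>

typedef enum { SLOT_IDLE, SLOT_RECEIVE, SLOT_SEND, SLOT_EXECUTE } slot_action_t;

typedef struct {
    slot_action_t action;   /* what the node does in this slot         */
    uint8_t       msg_id;   /* message to send/receive (if applicable) */
} rodl_entry_t;

#define SLOTS_PER_ROUND 8

/* Example schedule for one node: listen in slots 0-1, run a task in slot 2,
 * send in slot 5, transceiver off otherwise. */
static const rodl_entry_t rodl[SLOTS_PER_ROUND] = {
    { SLOT_RECEIVE, 1 }, { SLOT_RECEIVE, 2 }, { SLOT_EXECUTE, 0 },
    { SLOT_IDLE, 0 },    { SLOT_IDLE, 0 },    { SLOT_SEND, 3 },
    { SLOT_IDLE, 0 },    { SLOT_IDLE, 0 },
};

/* Called once per slot by the timer that tracks the global time. */
void dispatch_slot(uint8_t slot)
{
    switch (rodl[slot].action) {
    case SLOT_RECEIVE: /* enable the receiver a sync window before the slot */ break;
    case SLOT_SEND:    /* transmit the message bound to this slot           */ break;
    case SLOT_EXECUTE: /* run the application task bound to this slot       */ break;
    case SLOT_IDLE:    /* keep the transceiver off to save energy           */ break;
    }
}
```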

The time-triggered approach requires the notion of a global time, which is provided by the RFA clock synchronization algorithm. Note that the algorithm can only approximate the global time. The best achievable precision in an ensemble of clocks is lower bounded by the convergence function of the synchronization algorithm and the maximum drift offset, which is determined by the maximum drift rate of all clocks in that ensemble. This is also known as the synchronization condition. In our approach, the convergence function is defined by the RFA and heavily depends on the maximum delay jitter, that is, the maximum absolute deviation of the delay a message encounters during communication.

In order to get promising results, the global time must be approximated with a very high precision. One way is to minimize the drift offset. This can be done either by using high-quality crystal oscillators or by more frequent resynchronization. Both approaches have drawbacks: in mass production, crystal oscillators are expensive compared to the cheap internal RC-oscillators of low-cost nodes, and a shorter period results in the exchange of more synchronization messages in the same time, which affects the energy consumption. Alternatively, the reduction of the maximum drift rate can also be achieved by a rate correction algorithm. In our approach, this algorithm is performed in the digital domain and makes use of the concept of virtual clocks. A virtual clock abstracts the physical clock by the use of macroticks. A macrotick comprises several microticks, which are generated by a physical clock. The principle of this concept is to change the number of microticks representing a macrotick in order to adjust the granularity and frequency of the virtual clock. In the current implementation, a macrotick corresponds to a complete cycle length. Thus, the duration of the periods can easily be changed by adjusting the threshold value of the physical timer/counter.
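A minimal sketch of the virtual-clock concept, assuming a compare-match timer whose threshold is reloaded once per period; field names and the interface are assumptions.

```c
/* Sketch of a virtual clock: rate correction is done purely by changing the
 * number of hardware microticks that make up one macrotick (here: one period). */
#include <stdint.h>

typedef struct {
    uint16_t microticks_per_macrotick;  /* nominal period length in microticks */
    int16_t  rate_adjust;               /* signed correction in microticks     */
    uint32_t macroticks;                /* virtual time in whole periods       */
} virtual_clock_t;

/* Called from the hardware timer compare-match interrupt at each period end. */
void on_period_timer(virtual_clock_t *vc, volatile uint16_t *timer_threshold)
{
    vc->macroticks++;
    /* Stretch or shrink the next period to correct the virtual clock rate. */
    *timer_threshold = (uint16_t)(vc->microticks_per_macrotick + vc->rate_adjust);
}
```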

3.1. Clock State Correction

The clock state synchronization is established by the RFA model and uses the smooth, monotonically increasing, and concave down state function of (1) to calculate the overall phase advance. Given the dissipation factor and the pulse strength, the phase advance equals

(3)

The direct implementation of all these functions would result in a time-consuming calculation process. Therefore, we simplify the equation by inserting the inverse state function into (3), which can then be transformed to

(4)

Assuming a strong dissipation factor and a small pulse strength, we can replace the exponential term by the first-order approximation of its Taylor expansion, and the remaining constant term is negligible. The phase advance then reduces to

(5)

As a result, we have a linear Phase Response Curve (PRC), where the coupling factor specifies the strength of coupling between the oscillators and depends on the product of the dissipation factor and the pulse strength. This result is similar to the simplified firing function described in [3].
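Under the Mirollo-Strogatz form assumed in the reconstruction above, the simplification presumably proceeds as follows; this is a sketch, with b the dissipation factor, ε the pulse strength, and α the resulting coupling factor.

```latex
% Sketch of the linearization, assuming the state function reconstructed above.
f^{-1}\!\bigl(f(\varphi)+\varepsilon\bigr)
  \;=\; \frac{e^{b\varepsilon}\bigl(1+(e^{b}-1)\varphi\bigr)-1}{e^{b}-1}
  \;=\; e^{b\varepsilon}\,\varphi \;+\; \frac{e^{b\varepsilon}-1}{e^{b}-1}

\Delta\varphi
  \;=\; \bigl(e^{b\varepsilon}-1\bigr)\,\varphi \;+\; \frac{e^{b\varepsilon}-1}{e^{b}-1}
  \;\approx\; b\,\varepsilon\,\varphi \;=\; \alpha\,\varphi
  \qquad (b \gg 1,\; b\varepsilon \ll 1)
```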

In contrast to the original RFA, our approach achieves a better synchronization precision and a faster convergence time by indirectly performing a clustering of the received synchronization events. This is done by ignoring all events which lie within the phase advance of the last event to which a node reacted. In effect, this corresponds to the introduction of a short refractory period. Additionally, we do not allow a node to react to firing events which would originally occur after the node reached the period end. This ensures that, once the nodes are synchronized, the fastest node no longer advances its phase, resulting in a better precision. The algorithm is formally analyzed in more detail below and guarantees network synchronization as long as the bounds for several parameters are maintained. Algorithm 1 describes the behavior of this extended RFA (E-RFA) in pseudocode. The refractory period is implemented by the condition in Line 9. The variable eventset contains the corrected phases of all received firing messages, and the random amount by which a transmission is preponed is bounded by the maximum and minimum message staggering delays.
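As a complement to this description, the following C sketch shows one possible period-end (reachback) handler: it sorts the recorded events, skips events whose reconstructed firing instant lies beyond the own period end, skips events inside the refractory window of the last event reacted to, and accumulates a linear phase advance. Names, constants, and the exact accumulation rule are assumptions, not the paper's Algorithm 1 verbatim.

```c
/* Sketch of an E-RFA reachback handler (assumed structure, not Algorithm 1). */
#include <stdint.h>
#include <stdlib.h>

#define MAX_EVENTS    16
#define PERIOD_TICKS  31250u        /* virtual-clock ticks per period          */
#define ALPHA_NUM     1             /* coupling factor alpha = 1/100 (assumed) */
#define ALPHA_DEN     100

static uint32_t eventset[MAX_EVENTS];  /* reconstructed phases of received firings */
static uint8_t  n_events;

/* Called by the receive path with the reconstructed phase of a firing event. */
void record_firing_event(uint32_t phase)
{
    if (n_events < MAX_EVENTS)
        eventset[n_events++] = phase;
}

static int cmp_u32(const void *a, const void *b)
{
    uint32_t x = *(const uint32_t *)a, y = *(const uint32_t *)b;
    return (x > y) - (x < y);
}

/* Returns the overall phase jump to apply at the start of the next cycle. */
uint32_t reachback_phase_jump(void)
{
    uint32_t jump = 0, refractory_end = 0;

    qsort(eventset, n_events, sizeof eventset[0], cmp_u32);
    for (uint8_t i = 0; i < n_events; ++i) {
        uint32_t phi = eventset[i];

        if (phi > PERIOD_TICKS)
            continue;   /* event lies beyond our own period end: ignore      */
        if (phi < refractory_end)
            continue;   /* within the refractory window of the last reaction */

        uint32_t advance = (phi * ALPHA_NUM) / ALPHA_DEN;   /* linear PRC    */
        jump += advance;
        refractory_end = phi + advance;   /* ignore events in this window    */
    }
    n_events = 0;       /* clear the event set for the next period           */
    return jump;
}
```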

Since the purpose of this work is to demonstrate that such a synchronization approach works with an off-the-shelf communication stack without MAC timestamping, we have to expect a delay jitter in the order of milliseconds due to the uncertainty in the application and MAC layers. It should be mentioned that Lundelius and Lynch have shown in [6] that, in the presence of a maximum delay jitter, an ensemble of clocks cannot be synchronized to a precision better than a corresponding lower bound.

Lower Bound for the Coupling Factor

We assume that every processor has a hardware clock which generates the phase. This clock stays within a linear envelope of real time. Note that whereas the hardware clock continuously increases, the phase is periodically reset with respect to a dynamic offset value which changes due to the state correction algorithm. The granularity of the hardware clock corresponds to the synchronization period. We therefore assume that there exists a positive constant, the maximum drift rate, bounding the deviation of the clock rate. Note that this definition of the bounded drift simplifies the calculation of the precision and may differ from the literature. We further assume a fully connected network in which the message delay always lies within a bounded range consisting of a constant part and the maximum delay jitter of the communication delay in real time. The lower bound for the coupling factor in an ensemble of nodes in a fully connected network then depends on the maximum drift rate, the message staggering delay, and the communication delay. Note that all relative parameters are defined with respect to the period, which for simplification we assume to be normalized. Let the maximum and minimum relative message staggering delays be given. We now show that in the case of two clocks, the modified RFA provides a bounded precision, defined as the maximum time difference in real time units between the instants at which the two nodes reach the threshold.

Lemma 3.1.

Let and be the drift offset. In the case of two clocks and no message loss, if the coupling factor is lower bounded to and , then for and , Algorithm 1 keeps the network synchronized with a worst case precision bounded to

(6)

Proof.

Assume the clocks are initially synchronized to . W.l.o.g. let be the faster node. We further use as the reference for the precision , where denotes the real time when 's phase reached . We further assume that the next time reaches the threshold is at time . Let be the corresponding precision at . For we then have and . Let respectively, denote the relative message staggering delay the node , respectively, has calculated for the last transmission. If the last fire event of was at , then with respect to the communication delay received the phase at and consequently adds the offset leading to . Similarly, a fire event from with offset is received by at phase . Let be the minimum, respectively, maximum possible phases of the calculated firing events. If , then it is guaranteed that , respectively, . Since , we have as stated.

Based on the current precision and the phase advance of and at time labeled by and , we are able to calculate the precision the next time reaches the threshold. That is, . However, we have to distinguish between three cases depending on . In detail, if , then and , or if , then also and , or finally if , then due to Line 4 of Algorithm 1 we have and . Note that the overlapping of and is volitional, because if , then both cases can occur and hence must be considered. Further note that the bound of ensures that the interception point of the phase of both nodes is within the last period. In order to keep the clocks within the precision, the inequality must be valid for all three cases. From the first case we get and . From the third case it follows and . Note that is always valid due to the definition of . From the second case, it can be derived that and . Again, ensures that is valid. The worst case precision with respect to these three cases then equals .

Note that the correctness of the proof requires that a node advances its phase at most once per period. However, if , then may initiate a firing event after already passed the threshold. Simply setting avoids this effect.

In order to get the worst case precision, we further have to incorporate the precision (I) and (II) for all three mentioned cases. In detail, for we additionally have to analyze for case if the equation holds and for case , if and are valid. Similarly for it must be ensured that . From these equations we can derive the following additional bounds: , and . Therefore, if we want bounded between , then must hold. Furthermore, in the case of , we have to adapt the worst case precision to which now equals the worst case upper bound, since all possible cases were considered.

Finally, it should be mentioned that the maximum relative message staggering delay must be smaller than . Otherwise, assume the case where both nodes are initially apart. Then both nodes will never perform a phase advance due to Line 4 of the algorithm.

Note that in the case of a fully connected network comprising more than two nodes, all nodes synchronize to the fastest one due to Lines 4 and 9 of Algorithm 1. In particular, the condition in Line 9 ensures that if a node advances its phase due to some received firing event, then all events immediately following shortly after that event are ignored. This condition is necessary: otherwise, assume the nodes are perfectly synchronized; a node would then perform a phase advance several times, which results in a mutual excitation if the network is very large.

Theorem 3.2.

Let and be the drift offset. In the case of clocks and no message loss, if the coupling factor is lower bounded to and , then for and , Algorithm 1 keeps the network synchronized with a worst case precision bounded to

(7)

Corollary 3.3.

If a fully connected network comprises only perfect clocks and the communication network suffers from no delay jitter, then the network stays synchronized with a precision of , if .

Note that Corollary 3.3 states that it is sufficient that the network is connected.

Corollary 3.4.

If a fully connected network comprises only perfect clocks and the communication network suffers only from delay jitter, then the network stays synchronized with a precision of , if .

Corollary 3.5.

If a fully connected network comprises clocks with a maximum drift rate and the network suffers from no communication delay, then the network stays synchronized with a precision of , if .

Upper Bound for the Coupling Factor

One may ask why the coupling factor is not set such that a node immediately adjusts its phase to a neighboring clock every time it receives a firing message from this clock. However, the following lemmata show that there exists a basic upper bound which holds for every network.

Definition 3.6.

A firing configuration of a fully connected network comprising nodes is defined to be the concatenation of the phase of node at the time when just reached the threshold for the th time and consequently applied the phase advance .

Lemma 3.7.

In a fully connected network comprising perfect clocks, if the coupling factor is too large, then the nodes may never become synchronized.

Proof.

The proof is based on the fact that if is too large, then the nodes will infinitely often enter the same firing configuration. Let and be the two participating processors where is the first node reaching the threshold. The initial firing configuration then is with . Next, reaches the threshold leading to with and . The next time reaches the threshold is at with and . Finally again reaches the threshold at with and .

If we assume that , then the phase advance can be reduced to . The same applies to and . Thus, if all three conditions are true, can be redefined to . In other words, the nodes will infinitely often enter the initial firing configuration. We now have to find the lowest value for which the inequality is valid. Equating all three conditions yields and . Thus we get .

Since the algorithm ignores all firing events immediately following shortly after a previous firing event due to Line 9, a node may perceive a set of nodes as a single node, and therefore Lemma 3.7 also applies to networks comprising more than two nodes. We now exploit the intuition behind Lemma 3.7 and extend this problem to a general network comprising several nodes.

Definition 3.8.

A firing configuration is called an infeasible firing configuration if there exists a positive integer such that the network re-enters the same configuration after that many firings and the network is not synchronized.

Lemma 3.9.

The maximum phase advance a node can perform in a fully connected network comprising nodes equals .

Proof.

The maximum phase advance occurs if the firing events are at close quarters such that no event is ignored due to Line 9 of Algorithm 1. In detail, assume a node received the firing event at the phases . The first phase advance then equals , where . Due to Line 9 of Algorithm 1, the earliest next time the node performs a phase advance can only be at and equals . Generally, and for . Solving the recursion leads to and thus . Solving the equation for then yields . The overall phase advance thus equals . Since the maximum occurs when , we finally get .

A weak upper bound results from the fact that we do not want a node to perform a phase advance which is greater than and directly follows from Lemma 3.9.

Corollary 3.10.

In a fully connected network comprising perfect clocks, if the coupling factor , then in every feasible execution a node will never perform a phase advance which is greater than .

Note that even if the weak bound is maintained, it can be shown that there exist infeasible firing configurations. However, due to imprecisions in calculations, the varying short-term drift, the delay jitter, and due to several other indeterministic environmental effects, this bound is generally applicable. A stronger bound results from empirical studies which have shown that infeasible firing configurations do not exist, if the maximum phase advance . The resulting bound for again can be deduced from Lemma 3.9.

Theorem 3.11.

In a fully connected network comprising perfect clocks, if the coupling factor , then the nodes will never enter an infeasible firing configuration.

Rate of Synchronization

Theorem 3.15 analyzes the time to sync for the case of two oscillators. The authors of [4] have also analyzed the case of oscillators. However, considering a multihop topology requires a more sophisticated solution. For the following proofs, let and denote the initial phase difference between the clocks and with in network .

Lemma 3.12.

The infeasible firing configuration with and is a unique fixpoint and has a phase difference of .

Proof.

If we set , we get and and thus and .

Although this fixpoint is a repeller, the roundoff error in the calculation may cause a node to enter the fixpoint. This is especially a concern if the granularity of the hardware clock is very low. The rate of sync with respect to different initial phase differences is visualized in Figure 4. It is obvious that there exists a special initial configuration which causes the network to enter this fixpoint. To analyze this initial configuration, we first transform the recursion of the dynamic system into a closed term.

Figure 4: The rate of synchronization for different initial configurations.

Lemma 3.13.

The phase difference of for equals

(8)

where , , , , , , and from Lemma 3.12.

Proof.

Let be the initial firing configuration with where and . The phase difference when reached the threshold for the th time is . From Lemma 3.7 we know that with and . If we substitute for and consider the phase difference of , we get and which yields for . The dissolving of the recursion is left to the reader and leads to the solution as stated.

Lemma 3.14.

There exists a unique initial phase difference where the network eventually enters the fixpoint of Lemma 3.12 and equals with from Lemma 3.13.

Proof.

If the network enters the fixpoint in at some , then we have a phase difference of for with from Lemma 3.12. Using (8) then yields . Since we get and thus . Using and from Lemma 3.13 results in . The initial phase difference then has to be as stated.

Theorem 3.15.

The number of iterations until synchrony is at most with , and from Lemma 3.13 and

(9)

Proof.

Note that either converges to or ) as visualized in Figure 4. Therefore, we simply equate (8) with if or with if . Since for and the multiplicative factor is smaller than , the term with respect to does not influence the rate of sync for larger and hence can be neglected. This leads to the equation as stated.

3.2. Clock Rate Calibration

The concept of clock rate calibration addresses the problem of frequency deviations due to the high clock drift of the RC-oscillators usually used in low-cost devices. This approach allows a longer resynchronization interval with the same synchronization precision. Note that the rate correction can be performed completely independently of the clock state correction scheme.

The core concept of our rate calibration algorithm is that a processor implements a virtual clock which abstracts the hardware clock. The algorithm implemented on the processor then only reads the time from the virtual clock. We further denote the ticks of the hardware clock by microticks and those of the virtual clock by ticks. One tick of the virtual clock comprises several microticks; by adjusting this number relative to a nominal threshold level, the time duration of one tick can be increased or decreased by an absolute adjustment value. In order to perform the rate calibration, every processor periodically broadcasts a synchronization message containing its transmission timestamp, and each receiver records the corresponding reception timestamps. We further assume that messages from the same sender are received in order. The dependency between the virtual and the hardware clock is characterized with respect to these messages. Note that we assume that the hardware clock is a linear function of real time within a sufficiently long period of time. In contrast, the virtual clock is periodically reset with respect to the resynchronization interval. This assumption is required in order to realize a pulse synchronization scheme.

The rate correction algorithm works as follows: based on the timestamp stored in , the receiving processor can calculate 's relative adjustment value in its own granularity, denoted by , with . Therein, the term denotes the latest received adjustment value from , that is, the relative adjustment value contained in the latest received message from . In order to reduce the impact of the delay jitter, we should choose the time interval between the two received messages as large as possible. Note that there still exists an upper limit due to the long-term stability of an oscillator, which is usually in the order of minutes. However, the optimal time interval also depends on the underlying oscillator type. In our case, we store the last received messages from each node and calculate the relative deviation with respect to the buffer size . To visualize the impact of the delay jitter, we replace by , where corresponds to the delay of message in 's clock granularity. From this it follows:

(10)

In our implementation we set reducing the impact of the jitter with respect to the resynchronization period to at about . The next step of the algorithm is to calculate the average relative phase adjustment value of all processors in the broadcast domain where , that is,

(11)

The main challenge, however, concerns the adjustments of to the new relative adjustment value of . Due to natural imprecisions and the delay jitter, a direct adjustment of is inappropriate resulting in a continuous increase or decrease of the real overall average relative adjustment value. This effect is also known as the common-mode drift. A better approach is to implement a parametrized adjustment by the use of a smoothing factor as shown in (12) which ensures that the virtual clocks smoothly converge to the overall average interval,

(12)

In our implementation we have chosen . However, it seems that the common-mode drift cannot be avoided but the effect can be minimized by carefully choosing with respect to the delay jitter. For this reason we have introduced bounds for the relative adjustment value, that is, . Furthermore, if the bound is exceeded, respectively, under-run, we decrease, respectively, increase by some small value before calculating .
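The following C sketch summarizes the rate calibration loop described in this subsection: estimate each neighbour's rate deviation from the spacing of its buffered sync messages, average over the neighbourhood, and smooth the own adjustment value toward that average within fixed bounds. All names, the buffer size, the smoothing factor, and the bounds are assumptions.

```c
/* Sketch of the clock rate calibration (assumed names and constants). */
#include <stdint.h>

#define BUFFER_MSGS    8         /* sync messages buffered per neighbour */
#define SMOOTHING_DIV  8         /* smoothing factor of 1/8 (assumed)    */
#define ADJ_BOUND      512       /* bound on |rate_adjust| in microticks */

typedef struct {
    uint32_t rx_local[BUFFER_MSGS];  /* local timestamps of the last receptions */
    int32_t  tx_adjust;              /* sender's latest advertised adjustment   */
    uint8_t  count;                  /* number of buffered timestamps           */
} neighbour_t;

static int32_t rate_adjust;          /* current adjustment of the virtual clock */

/* Deviation of one neighbour's period from the nominal period, averaged over
 * the buffered interval and expressed in local microticks per period. */
static int32_t neighbour_deviation(const neighbour_t *n, uint32_t nominal_period)
{
    if (n->count < BUFFER_MSGS)
        return rate_adjust;                       /* not enough data yet */
    uint32_t span = n->rx_local[BUFFER_MSGS - 1] - n->rx_local[0];
    int32_t avg_period = (int32_t)(span / (BUFFER_MSGS - 1));
    /* The sender already applies its own adjustment; add it back in. */
    return avg_period - (int32_t)nominal_period + n->tx_adjust;
}

/* Called once per resynchronization period. */
void calibrate_rate(const neighbour_t nb[], uint8_t n_nb, uint32_t nominal_period)
{
    int32_t sum = 0;
    for (uint8_t i = 0; i < n_nb; ++i)
        sum += neighbour_deviation(&nb[i], nominal_period);
    int32_t avg = (n_nb > 0) ? sum / (int32_t)n_nb : rate_adjust;

    /* Smooth convergence toward the average to limit the common-mode drift. */
    rate_adjust += (avg - rate_adjust) / SMOOTHING_DIV;

    if (rate_adjust >  ADJ_BOUND) rate_adjust =  ADJ_BOUND;
    if (rate_adjust < -ADJ_BOUND) rate_adjust = -ADJ_BOUND;
}
```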

3.3. Energy Awareness

Energy consumption is an important quality characteristic of each communication protocol used in sensor networks. Often more than 50 percent of the energy is spent on idle listening [7]. It is therefore necessary to reduce the major sources of energy consumption. Some Medium Access Control (MAC) protocols have already incorporated such a concept (e.g., S-MAC, T-MAC). However, we assume that the underlying MAC layer is only responsible for medium access control and not for energy improvements. For this reason, we assign the tasks for energy reduction to the upper layers.

A usual approach to reducing the consumed power is to periodically turn off the transceiver module when it is not required. A protocol using such a scheme is called a duty-cycle protocol. The duty-cycle is defined as the ratio between the time spent listening on the medium for synchronization messages and the duration of the complete period. As already mentioned, the bounded synchronization precision necessitates that the receiver module be enabled some time before any transmission takes place. This safety margin equals the synchronization window and should be greater than the upper bound of the synchronization precision. A node considers itself synchronized if the maximum absolute deviation to all neighboring nodes is smaller than the specified synchronization window. With respect to the relative message staggering delays and the period length, the duty-cycle follows accordingly. Note that after a number of periods, each node has to listen to the medium for a full period in order to avoid clique building.

4. Evaluation by Simulation

We evaluated our approach with a probabilistic wireless sensor network simulator called JProwler (available at http://www.isis.vanderbilt.edu/Projects/nest/jprowler). JProwler has been developed by the Institute for Software Integrated Systems at Vanderbilt University and is configured by default to simulate the behavior of the Berkeley Mica Motes running TinyOS with the B-MAC protocol. It is a Java version of the Prowler [8] network simulator, which is used for verifying and analyzing communication protocols of ad hoc wireless sensor networks. Note that B-MAC is very similar to the IEEE 802.15.4 standard, because both implement the same CSMA/CA mechanism. Therefore, by modifying the MAC layer specific constants, the simulator can be used for the ZigBee nodes.

For instance, the transmission time is based on the amount of transmitted data. In our case, the synchronization frame contains the frame identifier (8 bit), the synchronization state (8 bit), the nominal phase offset (16 bit), the phase adjustment value (16 bit), the sender timestamp (32 bit), the tick number (16 bit), and a checksum (8 bit). In sum, the application needs 13 bytes for one synchronization message. The real amount of transmitted data is greater due to the overhead of the MAC and the physical layer. According to the 802.15.4 MAC standard [5], the complete overhead is about 15 bytes (9 bytes from the MAC layer and 6 bytes from the physical layer). Assuming that the system works in the 2.4 GHz ISM band, the bit rate is 250 kbps. As a result, the serialization delay is assumed to be about one millisecond. Note that the propagation delay is negligible. Including the worst case constant software-dependent time of the sender and receiver (including interrupt and buffer handling), the worst case constant transmission time is about 2 milliseconds. These parameters are reflected in JProwler's specific MAC constants by the minimum waiting time and the transmission time of a sync message; both parameters were set to 1 millisecond. To be more realistic, we have set the random waiting time to 2 milliseconds, which corresponds to the delay jitter we have observed in several experiments. Note that we do not guarantee the correctness of these values. Similarly to the implementation on the real hardware, the backoff scheme was deactivated in order to reduce the additional resulting delay jitter. The graphical user interface was enhanced by several new dialogs which enable the user to modify various parameters during the simulation. We further extended the simulator by an oscillator model; thus, every virtual node is based on an oscillator, for example, an RC-oscillator or one of several crystal cuts. This allows the simulation of clock drift and its influence on the clock synchronization. Due to the fact that the frequency of an oscillator heavily depends on the supply voltage and the ambient temperature, the enhanced JProwler also simulates the ambient temperature. Other new features are the adjustment of the simulation speed and the enabling/disabling of nodes during the simulation.
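For illustration, the 13-byte synchronization frame described above can be written out as a packed C struct; the field names are assumptions, while the field sizes are taken from the text.

```c
/* The 13-byte application payload of a synchronization message. */
#include <stdint.h>

typedef struct __attribute__((packed)) {   /* packed: avr-gcc/GCC attribute */
    uint8_t  frame_id;        /*  8 bit frame identifier      */
    uint8_t  sync_state;      /*  8 bit synchronization state */
    uint16_t nominal_offset;  /* 16 bit nominal phase offset  */
    uint16_t phase_adjust;    /* 16 bit phase adjustment      */
    uint32_t tx_timestamp;    /* 32 bit sender timestamp      */
    uint16_t tick_number;     /* 16 bit tick number           */
    uint8_t  checksum;        /*  8 bit checksum              */
} sync_frame_t;               /* 13 bytes in total            */
```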

4.1. Experiments and Results

The simulation results discussed in this section give an overview of the achievable quality of our synchronization approach. For this reason, several network topologies have been developed and simulated. The results are compared with respect to different parameter choices, that is, the coupling factor and the number of nodes in the network.

In order to compare the simulation results with the outcomes from [3], the evaluation metrics are similar. The two important metrics are the amount of time until the system achieves synchronicity and the quality of the precision. The time to sync defines the time until all nodes have entered the synchronization state and is determined by two parameters: the synchronization window and the number of required periods a node has to stay within this window. In the following, we call this number of required periods the synchronization periods. A node only enters the sync state if the maximum absolute deviation with respect to the other nodes is within the synchronization window for the required number out of the last firing iterations. The definition of the 50th and 90th percentile group spread differs from the one defined in [3], because we only refer to one firing group. Therefore, the group spread in the simulation is defined to be the maximum absolute time difference between any two nodes in the network and thus cannot be greater than half the synchronization interval. We characterize the group spread distribution with the 50th and 90th percentiles.

Incorrect results due to settling effects during the startup phase are avoided by starting the group spread measurement only some time after the time to sync and stopping it before the time the experiment ends. On this account, the group spread measurement is performed only during this interval.
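A minimal C sketch of this evaluation metric, assuming one group spread value per period has already been collected: it sorts the per-period spreads and reports the 50th and 90th percentile.

```c
/* Sketch of the group-spread evaluation (assumed data layout). */
#include <stdio.h>
#include <stdlib.h>

static int cmp_double(const void *a, const void *b)
{
    double x = *(const double *)a, y = *(const double *)b;
    return (x > y) - (x < y);
}

/* p in [0,1]; simple nearest-rank style percentile of a sorted array. */
static double percentile(const double *sorted, int n, double p)
{
    int idx = (int)(p * (n - 1));
    return sorted[idx];
}

/* spreads[i] = max |t_j - t_k| over all node pairs (j,k) in period i */
void report_group_spread(double *spreads, int n_periods)
{
    qsort(spreads, n_periods, sizeof *spreads, cmp_double);
    printf("50th percentile group spread: %f\n", percentile(spreads, n_periods, 0.5));
    printf("90th percentile group spread: %f\n", percentile(spreads, n_periods, 0.9));
}
```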

4.1.1. Parameter Settings

Several parameter settings are the same for all experiments and are adapted to simulate the behavior of our testbed environment. For instance, every virtual node is based on a virtual RC-oscillator. According to the datasheet, the real nodes have a nominal frequency of 8 MHz ± 10%. For this reason, every virtual node encounters a random initial clock drift between −100 milliseconds and +100 milliseconds per second. The general values of the other parameters are given in Table 2.

Table 2 The general parameter choice used in all simulator experiments.

4.1.2. Simulation Results

The next paragraphs discuss the simulation results for several network topologies and parameter choices. Every configuration was simulated over a fixed number of periods.

The All-to-All Topology

The all-to-all communication topology is mainly used to measure the quality of the synchronization in dependence of the number of nodes and the coupling factor. Therein, every node is in the transmission range of every other node.

The simulation results based on this topology give a good overview of the impact of different coupling factors. According to the diagrams in Figure 5, the time to sync decreases with an increasing coupling factor. If the factor is too big, synchronicity will not be achieved; this effect is due to the upper bound of the coupling factor. Table 3 summarizes the upper bound for different network sizes with respect to Corollary 3.10. The stronger bound of Theorem 3.11 was neglected due to the fact that most network simulations using the weak upper bound with a random initial configuration achieved synchronicity. The times to sync calculated in Table 3 are based on Theorem 3.15 for an initial maximum phase difference. Note that a constant offset was added so that the values can be compared with the simulation results, because the simulator declares a set of nodes synchronized only if they stay within some precision for a number of consecutive periods. The high time-to-sync bar in Figure 5(a) for 20 nodes comes from the fact that the coupling factor was too high.

Table 3 Calculated bounds for the coupling factor and the time to sync.
Figure 5: The time to sync and the group spread for an all-to-all topology experiment in dependence of the network size and different coupling factors. The solid bars in (b) represent the 50th percentile group spread, while the error bars correspond to the 90th percentile.

If we assume that the rate calibration scheme reduces the worst case drift, then a corresponding worst case precision follows from the parameters in Table 2 and Theorem 3.2; without the rate calibration scheme, the bound would be considerably worse. The group spread diagram complies with our theoretical result that the precision does not depend on the coupling factor as long as it maintains the lower bound. Note that the simulations are based on a realistic radio model considering message collisions and transmission strength as well as the backoff scheme of the MAC layer. Therefore, the results show that if all network parameters (e.g., transmission delay, maximum drift rate, etc.) are correctly defined, then the worst case precision is maintained most of the time. However, there may exist outliers in the case of an omission failure of the fastest node, because this algorithm does not allow a node to adjust its clock backward.

The Multihop Topology

This communication topology is the most important one, because in reality many sensor networks are based on a source-to-sink communication topology with a communication path consisting of several hops. The simplest multihop scenario is a network of nodes ordered in a chain, where each node can only communicate with its immediate neighbors. We further call the chain size the network diameter. A network with a large diameter is often very problematic to synchronize, because every hop involves a communication delay which degrades the overall synchronization precision; an estimate of the achievable precision with respect to Theorem 3.2 therefore grows with the network diameter. Our solution is based on grouped multihop networks. Therein, the nodes are replaced with groups comprising several nodes in an all-to-all topology, which all have a bidirectional communication link to all nodes in the immediate neighboring groups. Figure 6 is a snapshot of a running simulation of the grouped multihop topology in JProwler. Therein, a dot represents a node and an arrow visualizes that a node is currently transmitting data; the gray scale visualizes the deviation. Note that all grouped multihop topologies treated in our experiments have the same network diameter of 10 hops but vary in the group size.

Figure 6: A simulation snapshot of a grouped multihop topology with a network diameter of 10 and a group size of 3 nodes.

The diagrams in Figure 7 show the time to sync and the group spread in dependence of the group size and the coupling factor; the network diameter is always 10. These diagrams lead to the result that the precision improves with a bigger group size. This effect is caused by the better information about the interval drift due to the increased number of neighboring nodes: if a node has more neighbors, it receives more information about the clock drift and can calibrate the interval duration more precisely, which also improves the synchronization precision. However, it is also important to have a preferably small coupling factor. On the one hand this increases the time to sync, but on the other hand it also increases the probability that the network achieves synchronicity. To sum up, it is difficult to find the best parameter settings for a given multihop network, but it is definitely a good choice to have a group size of more than one node. This also increases the dependability and availability of the network.

Figure 7: The time to sync and the group spread for a multihop topology with a network diameter of 10 in dependence of the cluster size and different coupling factors. Note that the number of nodes must be divided by 10 to get the group size. The solid bars in (b) represent the 50th percentile group spread, while the error bars correspond to the 90th percentile.

5. Evaluation on Real Hardware

The simulation results provide a good basis for several parameter estimations in order to optimize the synchronization precision for different network topologies. However, the simulator does not provide information about power consumption and can never fully reflect the real-world scenario. For this reason, we implemented and evaluated our distributed algorithm in combination with the time-triggered approach on real hardware.

5.1. Testbed Description

The testbed is based on Atmel's demonstration kit ATAVRRZ200 [9]. The kit features two types of component boards: the Display Board and the Remote Controller Boards (RCBs). The Display Board is based on an ATmega128 controller and features an LCD module; this board also works as a docking station for programming the RCBs. The RCBs are based on an ATmega1281 controller and contain an AT86RF230 (2450 MHz band) radio transceiver. The implementation of our approach was done on these 802.15.4/ZigBee nodes. The information about the synchronization precision is gathered via the established TDMA scheme; therefore, besides energy savings, the time-triggered approach also serves as an evaluation protocol. For this, we used a modified version of the TTP/A protocol [10].

The synchronization algorithm was implemented analogously to the implementation in JProwler. A simple RC-oscillator-based 16-bit timer was used to generate the synchronization interval with a duration of one second. The only differences with respect to the parameter choices in Table 2 are a higher granularity of the virtual clock (31250 ticks/period) and a higher transmission delay of 896 microseconds.
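A matching timer configuration on the ATmega1281 could look as follows: the 8 MHz RC clock divided by a prescaler of 256 yields exactly 31250 timer ticks per second, so a compare value of 31249 gives the one-second nominal period. This is a sketch using the standard AVR register interface; the paper's actual driver may differ.

```c
/* Sketch: 16-bit Timer1 in CTC mode generating the one-second period. */
#include <avr/io.h>
#include <avr/interrupt.h>

void timer1_init(void)
{
    TCCR1A = 0;
    TCCR1B = (1 << WGM12) | (1 << CS12);   /* CTC mode, prescaler 256     */
    OCR1A  = 31249;                        /* 8 MHz / 256 = 31250 ticks/s */
    TIMSK1 = (1 << OCIE1A);                /* interrupt at each period end */
    sei();
}

ISR(TIMER1_COMPA_vect)
{
    /* Period end: run the reachback handler and apply the state/rate
     * correction, e.g., by reloading OCR1A for the next period. */
}
```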

We modified the initial settings of the MAC sublayer, that is, the minimum backoff exponent, to reduce the transmission delay. We further assumed that it is better to omit a message than to transmit postponed synchronization data, since the underlying MAC layer does not support MAC timestamping. However, several measurements have shown that regardless of this configuration, some messages are still transmitted with a high delay. Therefore, a tradeoff has to be made between the probability of an omission failure and a high delay jitter with respect to the resulting precision degradation. In detail, whereas a high delay jitter only degrades the deterministic worst case precision, omission failures mostly still provide a better precision, though the behavior is more nondeterministic and therefore results in a worse worst case precision. Note that omission failures are especially a problem if they occur at the fastest node and may then degrade the worst case precision in proportion to the maximum number of consecutive omission failures at the same node. In our implementation, every node is configured as a Full-Function Device (FFD) and no association process is required. This necessitates the use of individual predefined 16-bit short addresses for each node.

5.2. Experiments and Results

The evaluation metrics for the testbed experiments are similar to the one used for the simulation experiments. In order to observe the relative deviations over all nodes in a network, we decided that every node transmits its own maximum absolute deviation of the last period to a central evaluation node, which then calculates the maximum over all received deviations. These values over several minutes are then taken to compute the 50th and the 90th percentile group spread.

To be able to compare the testbed results with the simulator results, the parameter configuration has to be the same as used in the simulator experiments. Since in reality it is not possible to speed up time, we reduced the experiment duration to 720 periods.

5.2.1. Testbed Results

The All-to-All Topology

For the all-to-all topology experiment, we again measured the group spread in dependence of several coupling factors.

Table 4 contains the simulation results and the testbed results with the same network configuration comprising 5 nodes in all-to-all topology. This demonstrates that the results are similar. For instance, the time to sync and the 50th percentile group spread of the testbed system are mostly better than the simulation results. Furthermore, the calculated worst case precision of is also maintained by the 90th percentile group spread, even though it is worse compared to the simulation results. The outliers come from the fact that the delay jitter of the communication delay due to the MAC stack was sometimes higher than expected. Note that we have already compensated the constant delays in the implementation. However, there may exist other delays which we have not considered. Simulation experiments have shown that a higher transmission delay or a higher delay jitter are the major reasons for the precision degradation due to the state correction algorithm and hardly affect the rate calibration scheme. Furthermore, the worst case precision may also result from an omission failure from a synchronization message of the fastest node. The results from an all-to-all topology comprising 9 nodes are comparable with those denoted in Table 4. Note that only a maximum number of 9 nodes were available for our experiments.

Table 4 Comparison of several parameters in dependence of different coupling factors. The values between the brackets correspond to the simulation results with the same all-to-all network configuration comprising 5 nodes.

The Multihop Topology

The results from the multihop experiments are important in order to get an overview of the limits of our synchronization approach. The first scenario was made up of 5 nodes ordered in a chain, where a node can only communicate with its immediate neighbors. The only difference between the simulation and the testbed environment is that the testbed environment does not have an omniscient observer which is able to continuously measure the synchronization deviation among all nodes. For this reason, we decided to measure the time difference between the edge nodes with the aid of an oscilloscope, where each node periodically sets an output pin at the same phase state for a short time. Unfortunately, these measurements cannot be gathered automatically over several periods. Therefore, we manually made snapshots over several minutes and took those diagrams that display the biggest time deviation. To emulate the multihop network, we simply implemented a message filter.

The results show that the precision of a realistic multihop network with 4 hops is about 3 milliseconds and even stays within the calculated worst case precision of a fully connected network multiplied by the number of hops. Interestingly, the simulation of the same network with a configured delay jitter of 1250 microseconds leads to the same result. To get an overview of the precision degradation with respect to the network diameter, another multihop experiment with 9 nodes was performed; the measurement setup is similar to the previous multihop network. The measurement results show a maximum deviation between the edge nodes of up to 14 milliseconds, which is better than what we observed during the simulation, where synchronicity was sometimes not achieved at all. Note that the measured worst case precision again stays within the calculated worst case precision for a fully connected network multiplied by the number of hops. However, such a high deviation was measured very seldom. In summary, the worst case precision of our synchronization algorithm degrades by at most the worst case precision of the corresponding fully connected network with each hop. Note that the results can be compared with the simulation results in [3] for a regular grid topology, in which a 90th percentile group spread of about 10 milliseconds was measured. Compared to their results, our worst case deviation of 14 milliseconds is not excessive.

We further measured the behavior of a grouped multihop network as shown in Figure 8, comprising 3 groups with a group size of 2 and two additional edge nodes which have a communication link to the corresponding edge groups. This was the only acceptable configuration with the 9 available nodes. The maximum deviation between the edge nodes measured over about 10 minutes was 7 milliseconds. In the simulator, we had to configure a delay jitter of 7 milliseconds, or in the case of no delay jitter an uncompensated additional transmission delay of 1.5 milliseconds, to get the same result. Therefore, it is highly likely that besides the delay jitter, the testbed environment additionally suffers from a longer communication delay, which was not accounted for in the current implementation. Unfortunately, due to the limited number of nodes, we were not able to make further experiments with a bigger group size. Thus we must rely on the simulation experiments, which show that a bigger group size usually results in a better synchronization precision.

Figure 8: The measurement setup for visualizing the deviation between the edge nodes of a quasi-grouped multihop network comprising 3 clusters with a cluster size of 2 and 2 edge nodes.

5.2.2. Energy Measurements

The energy consumption plays an important role for the device lifetime in battery-powered wireless networks, especially if no infrastructure is available. All RCBs are battery-powered with two 1.5 V AAA batteries and thus have a supply voltage of 3 V. In order to determine the device lifetime, the average power consumption is compared with the electrical energy of the batteries. The lifetime in hours is the ratio of the battery energy to the average power consumption, which can be reduced to the equivalent formula of the battery charge (denoted in mAh) divided by the average current consumption (in mA).
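A worked example of this lifetime formula; the capacity and average current are placeholder values, not the paper's measurements.

```c
/* Lifetime in hours = battery charge (mAh) / average current (mA). */
#include <stdio.h>

int main(void)
{
    double battery_mAh = 1000.0;  /* assumed AAA capacity          */
    double avg_current = 7.5;     /* assumed average current in mA */
    printf("lifetime: %.1f hours\n", battery_mAh / avg_current);
    return 0;
}
```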

For further energy calculations, the current consumption during a complete period can be classified into four parts, which were measured with an oscilloscope and a current shunt resistor: the firing time, the idle time, the execution time, and the transmission time.

The firing time results from the message staggering delay and corresponds to the interval in which the transceiver is enabled and the nodes are allowed to transmit their synchronization messages. In our test application, this interval is called part 1 of the firing time and has a duration of 50 milliseconds. The consumed current during this time is about 20 mA. Note that there is an interval between the end of the firing time and the period end which acts as a safety margin in case a node starts a transmission exactly at the end of the firing time; if so, the transceiver must remain enabled as long as the transmission continues. This safety margin is called part 2 of the firing time and consumes a current of 24 mA.

The idle time is the part where the current drops to a minimum. The current is small because the device is dormant, that is, the transceiver is disabled. With our ZigBee nodes, we measured a current of about 6.2 mA.

The execution time is the time during which the RCB device executes code. This is always the case at the end of each period, where the device has to execute the RFA. Other execution tasks must be configured in the RODL file. In the test application used for the energy measurement, the RODL file contains only one execution slot per period, which is responsible for data preparation. We measured a current of about 11 mA for a duration of 1 millisecond. This energy part mainly depends on the amount of code executed by the tasks.

The transmission time corresponds to the time during which the device is transmitting data. Normally, this is the case when the RCB broadcasts its synchronization message during the firing time. Other transmissions during the period must be registered in the RODL file. We further measured the current consumption and duration of a registered transmission slot, which is used to broadcast test data. Note that the measured duration does not equal the real transmission time: the transceiver requires some time for the startup phase, and the Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) scheme at the MAC layer may also cause some delay, although the backoff scheme was disabled.

Table 5 sums up the different energy consumers with the corresponding battery discharge in mAs. If the period duration is exactly one second, the values in this table result in a per-cycle battery discharge whose numerical value in mAs equals the average current consumption $I_{avg}$ in mA. Given the battery charge $Q$, the resulting lifetime $L$ in hours can then be calculated as follows:

$$ L = \frac{Q}{I_{avg}} \qquad (13) $$

The configured duty-cycle for this result is about 7 percent, but could be reduced by increasing the period time. If we assume that the time slices of the other consumers are for the most part constant, then a larger period time also implies a larger idle time. Note that a larger period time usually also entails a degradation in precision. The duty-cycle, hereinafter denoted by DC, is defined as the ratio between the sum of the two firing times ($t_{f1}$, $t_{f2}$), the execution time ($t_{exec}$), and the transmission time ($t_{tx}$) and the complete period time ($T$):

$$ DC = \frac{t_{f1} + t_{f2} + t_{exec} + t_{tx}}{T} $$
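The following short Python sketch illustrates how the duty-cycle and the average current follow from the individual consumers; the durations of firing part 2 and of the transmission slot are placeholders (marked ASSUMED), since only their currents are quoted above:

PERIOD_S = 1.0   # period time T

consumers = {                       # name: (duration in s, current in mA)
    "firing part 1": (0.050, 20.0),
    "firing part 2": (0.010, 24.0),   # duration ASSUMED
    "execution":     (0.001, 11.0),
    "transmission":  (0.009, 20.0),   # duration and current ASSUMED
}
active_time_s = sum(d for d, _ in consumers.values())
consumers["idle"] = (PERIOD_S - active_time_s, 6.2)

discharge_mAs = sum(d * i for d, i in consumers.values())  # battery discharge per cycle
avg_current_mA = discharge_mAs / PERIOD_S                  # numerically equal for T = 1 s
duty_cycle = active_time_s / PERIOD_S                      # about 7% with these values

print(f"average current {avg_current_mA:.2f} mA, duty-cycle {duty_cycle:.1%}")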

Table 5 Listing of the major energy consumers and their corresponding battery discharge. The energy calculation assumes a supply voltage of 3 V.

To follow up on this energy example, we now calculate the lifetime improvement with respect to the lifetime that would result if no synchronization approach were established, that is, a duty-cycle of 100%. In that case, the average current consumption equals 23.752 mA, and a duty-cycle of 100% therefore corresponds to a lifetime of about 50.5 hours. A comparison between the lifetime at a duty-cycle of 100% and the lifetime achieved with our configured duty-cycle of about 7% shows that the synchronization approach improves the lifetime by at least a factor of three.

To illustrate the dependence between the lifetime improvement and the period time, we introduce the improvement factor, denoted by IF. This factor is the ratio between the improved lifetime and the reference lifetime corresponding to a duty-cycle of 100% for the same period time, that is, $IF = L_{DC}/L_{100\%} = I_{avg,100\%}/I_{avg,DC}$, as visualized in Figure 9.

Figure 9

The lifetime improvement as a function of the period time.
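A sketch of the relation plotted in Figure 9, assuming (as argued above) that the charge drawn during the active phases stays constant while the idle time grows with the period; the active-phase values are taken from the illustrative sketch above and are therefore assumptions, not measurements:

I_FULL_MA = 23.752        # average current at a 100% duty-cycle (Section 5.2.2)
I_IDLE_MA = 6.2           # idle current while the transceiver is disabled
ACTIVE_CHARGE_MAS = 1.43  # charge per cycle drawn during the active phases (ASSUMED)
ACTIVE_TIME_S = 0.07      # active time per cycle, about 7% of a 1 s period (ASSUMED)

def improvement_factor(period_s):
    # IF(T) = I_avg(100%) / I_avg(T); the battery charge Q cancels out of the ratio.
    idle_charge_mAs = (period_s - ACTIVE_TIME_S) * I_IDLE_MA
    i_avg_mA = (ACTIVE_CHARGE_MAS + idle_charge_mAs) / period_s
    return I_FULL_MA / i_avg_mA

for T in (1.0, 2.0, 5.0, 10.0):
    print(f"T = {T:4.1f} s -> IF = {improvement_factor(T):.2f}")
# IF approaches I_FULL_MA / I_IDLE_MA (about 3.8) for very long periods.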

6. Discussion

The simulation and testbed results have shown that the simulator provides promising results which are mostly consistent with the testbed measurements. Several experiments have shown that the rate calibration works well in the presence of high delay jitter and transmission delay. However, the results from the multihop topology experiments in the testbed system are worse than the corresponding simulation results. This comes from the fact that our testbed environment suffers from an unexpected delay jitter and an additional communication delay which was not accounted for in the state correction algorithm.

These conditions, together with the asynchronous communication patterns of realistic sensor networks, make the deployment of low-cost nodes in highly multihop scenarios with high requirements on availability and dependability usually inappropriate. In contrast, the proposed grouped multihop topology with a small network diameter allows more reliable forwarding than a standard ad hoc network would. For instance, the nodes of a single group could be statically configured via the RODL file such that they all forward the same message to the neighboring groups in different slots, but within the same period, as sketched below. In other words, such a group comprises several nodes with the same functionality (e.g., sensor measurement, relaying, sensor fusion, computation, or actuator control in combination with Triple Modular Redundancy (TMR)). In this case, a single node failure has no impact on the network behavior, since there exist no dedicated nodes. Thus, graceful degradation is the main advantage of this protocol.
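To make the redundancy idea concrete, a group-level schedule might look like the following Python sketch; the slot numbers, message identifiers, and the RodlEntry structure are hypothetical and only illustrate that all members of a group forward the same message in distinct, statically assigned slots of the same period:

from dataclasses import dataclass

@dataclass
class RodlEntry:
    slot: int        # slot index within the period
    action: str      # "receive", "forward", or "execute"
    message_id: int  # identifier of the scheduled message

# Both members of group B relay message 7 towards group C in different slots,
# so the loss of either node does not break the forwarding path.
rodl_group_b = {
    "node_b1": [RodlEntry(slot=3, action="receive", message_id=7),
                RodlEntry(slot=5, action="forward", message_id=7)],
    "node_b2": [RodlEntry(slot=3, action="receive", message_id=7),
                RodlEntry(slot=6, action="forward", message_id=7)],
}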

The time-triggered approach requires an a priori static communication plan (the RODL file) for each node in the network. Therefore, the network and possible multihop scenarios must be analyzed before the RODL files can be created, which requires additional work and expenses compared to other protocols. The inherent advantage, however, is that the deployed communication schedule can be perfectly adapted to the known static network topology. Multihop routing algorithms are then no longer necessary, since the routing is done implicitly during the configuration of the RODL files. However, a change in the network topology may also involve a reconfiguration of all RODL files.

We have shown that our approach provides a duty-cycle of about 7% and thus reduces the energy consumption in a simple network by at least a factor of three. Note that the duty-cycle heavily depends on the synchronization precision. Since we target low-cost nodes using an off-the-shelf MAC layer without MAC timestamping, our synchronization precision is limited by the MAC-specific delay jitter. Other established protocols such as S-MAC or the slotted variant of IEEE 802.15.4 use an alternative or more sophisticated clock synchronization approach and thus usually achieve a duty-cycle below 1%. However, such protocols usually demand a dedicated master node (and possibly additional backup nodes), which represents a single point of failure. In contrast, our approach has the inherent advantage that it does not require any dedicated nodes. Furthermore, a higher synchronization precision and thus a lower duty-cycle could be achieved with more accurate but also more expensive external crystal oscillators. As a result, the tradeoff is between node cost and power consumption.

7. Related Work

Literature on biologically inspired Firefly synchronization can be categorized into papers about the biological/mathematical models of the Firefly approach [1, 4, 11], work that applies the biologically inspired Firefly synchronization model to realize communication in sensor networks [3, 12], and architectures and evaluations that apply the Firefly synchronization model to establish a time-triggered service [13].

Werner-Allen et al. [3] present the Reachback Firefly Algorithm, which is well suited for implementation in sensor networks. The algorithm was simulated with TOSSIM over several parameter choices (e.g., different node topologies, coupling strengths, and network diameters).

In [14], the authors introduce a time advance strategy based on the PCO model, which takes the delays in wireless systems into account. Similarly to [3], they incorporate the fact that a node cannot transmit and receive at the same time. Their time advance strategy compensates for the delay that is responsible for the lower bound of the accuracy; this delay is dominated by the transmission and decoding delays. The compensation is done by delaying the transmission of the synchronization messages.

8. Conclusion and Outlook

An alternative synchronization algorithm based on the synchronous flashing of fireflies was introduced in order to establish a global timebase that supports the implementation of a time-triggered approach on top of the off-the-shelf IEEE 802.15.4 MAC layer. This allows collision-free communication and a reduction of the power consumption by at least a factor of three. The synchronization is based on a self-organizing principle with simple calculations and provides full scalability and graceful degradation, which is beneficial for the use in sensor networks. The approach has the inherent advantage that no dedicated synchronization node is required and thus there exists no single point of failure. Furthermore, the additional rate calibration scheme allows a longer resynchronization interval and the use of cheap oscillators with high drift rates, which are typical for low-cost nodes.

The approach has been evaluated by simulation and by an implementation in a real testbed environment. Several experiments based on an all-to-all topology have shown that it is possible to achieve a synchronization precision better than 1 millisecond. Unfortunately, the testbed system suffered from an unexpected delay jitter and an additional communication delay. For this reason, the testbed results for the multihop topologies were worse than the simulation results with a low delay jitter.

Future work will focus on improving the synchronization precision by using an alternative MAC stack that supports MAC timestamping. Furthermore, we want to extend the algorithm to be resilient to compromised nodes which may behave as an adversary trying to destroy the synchronization. Last but not least, we want to compare the results of our approach with other synchronization schemes designed for wireless networks.

Algorithm 1: E-RFA: code for node i.

Initially: eventset := ∅

upon event ⟨transmit⟩ do                          // staggered transmission of sync-message
    trigger ⟨broadcast | current phase⟩           // broadcast current phase to all neighbors

upon event ⟨sync-message | phase⟩ from node j do  // received sync-message
    if the reconstructed firing event lies within the current period then
        add it to eventset

upon event ⟨threshold reached⟩ do
    clean up
    for each event in eventset in increasing order do
        if the event passes the time consistency check then
            calculate the phase advance
    apply the reachback response
    calculate the firing offset
    eventset := ∅
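To make the structure of Algorithm 1 concrete, the following Python sketch implements a reachback-style update using the standard linearized Mirollo-Strogatz coupling; the coupling parameters, class and method names, and the handling of the firing window are illustrative assumptions and do not reproduce the firmware implementation:

import math
import random

EPSILON = 0.1   # coupling strength (illustrative value)
B = 3.0         # dissipation factor of the firing function (illustrative value)
A = math.exp(B * EPSILON)
C = (math.exp(B * EPSILON) - 1.0) / (math.exp(B) - 1.0)

def phase_jump(phi):
    # Linearized Mirollo-Strogatz response: new (normalized) phase after hearing a firing.
    return min(1.0, A * phi + C)

class ReachbackNode:
    def __init__(self, period_s=1.0):
        self.period_s = period_s
        self.eventset = []   # normalized phases of overheard firings

    def on_sync_message(self, phi):
        # Corresponds to "add to eventset" in Algorithm 1.
        if 0.0 <= phi <= 1.0:
            self.eventset.append(phi)

    def on_period_end(self):
        # "Reachback response": replay the recorded firings in increasing order
        # and return the resulting phase advance for the next period.
        advance = 0.0
        for phi in sorted(self.eventset):
            shifted = min(1.0, phi + advance)
            advance += phase_jump(shifted) - shifted
        self.eventset.clear()
        return advance

    def next_firing_offset(self, window_s=0.05):
        # Message staggering: pick a random transmission offset within the
        # firing window to mitigate the deafness problem.
        return random.uniform(0.0, window_s)

# Tiny usage example: record two overheard firings and compute the advance.
node = ReachbackNode()
node.on_sync_message(0.3)
node.on_sync_message(0.7)
print(node.on_period_end())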

References

  1. Buck J: Synchronous rhythmic flashing of fireflies. II. The Quarterly Review of Biology 1988,63(3):265-289. 10.1086/415929


  2. Elmenreich W, Bauer G, Kopetz H: The time-triggered paradigm. Proceedings of the Workshop on Time-Triggered and Real-Time Communication, December 2003, Manno, Switzerland


  3. Werner-Allen G, Tewari G, Patel A, Welsh M, Nagpal R: Firefly-inspired sensor network synchronicity with realistic radio effects. Proceedings of the 3rd International Conference on Embedded Networked Sensor Systems (SenSys '05), November 2005, San Diego, Calif, USA 142-153.


  4. Mirollo RE, Strogatz SH: Synchronization of pulse-coupled biological oscillators. SIAM Journal on Applied Mathematics 1990,50(6):1645-1662. 10.1137/0150098


  5. IEEE Computer Society : IEEE Standard for Information technology—telecommunication and information exchange between systems—local and metropolitan area networks—specific requirements—part 15.4: Wireless Medium Access Control (MAC) and Physical Layer (PHY) Specifications for Low-Rate Wireless Personal Area Networks (LR-WPANs). Institute of Electrical and Electronics Engineers, September 2003

  6. Lundelius J, Lynch N: An upper and lower bound for clock synchronization. Information and Control 1984,62(1):190-204.


  7. Ye W, Heidemann J, Estrin D: Medium access control with coordinated adaptive sleeping for wireless sensor networks. IEEE/ACM Transactions on Networking 2004,12(3):493-506. 10.1109/TNET.2004.828953


  8. Simon G, Volgyesi P, Maroti M, Ledeczi A: Simulation-based optimization of communication protocols for large-scale wireless sensor networks. Proceedings of the IEEE Aerospace Conference, March 2003, Big Sky, Mont, USA 3: 1339-1346.


  9. Atmel Corporation : ATAVRRZ200 Demonstration Kit AT86RF230 (2450 MHz band) Radio Transceiver User Guide. Document No. 5183AZIGB12/07/06, July 2006

  10. Eberle S, Ebner C, Elmenreich W, et al.: Specification of the TTP/A protocol. In Research Report. Institut für Technische Informatik, Technische Universität Wien, Vienna, Austria; 2002. Version 2.00


  11. Buck J, Buck E: Synchronous fireflies. Scientific American 1976,234(5):74-85. 10.1038/scientificamerican0576-74


  12. Tyrrell A, Auer G, Bettstetter C: Biologically inspired synchronization for wireless networks. In Studies in Computational Intelligence. Volume 69. Edited by: Dressler F, Carreras I. Springer, Berlin, Germany; 2007:47-62. 10.1007/978-3-540-72693-7_3


  13. Leidenfrost R, Elmenreich W: Establishing wireless time-triggered communication using a firefly clock synchronization approach. Proceedings of the 6th International Workshop on Intelligent Solutions in Embedded Systems (WISES '08), July 2008, Regensburg, Germany 1-18.


  14. Tyrrell A, Auer G, Bettstetter C: Firefly synchronization in ad hoc networks. Proceedings of the MiNEMA Workshop, February 2006, Leuven, Belgium



Acknowledgments

This work was supported by the Austrian FWF project TTCAR under contract no. P18060-N04. The authors would like to thank Johannes Klinglmayr for constructive comments on an earlier version of this paper.

Author information


Corresponding author

Correspondence to Robert Leidenfrost.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.



Cite this article

Leidenfrost, R., Elmenreich, W. Firefly Clock Synchronization in an 802.15.4 Wireless Network. J Embedded Systems 2009, 186406 (2009). https://doi.org/10.1155/2009/186406

