This paper considers a network where a node wishes to transmit a source message to a legitimate receiver in the presence of an eavesdropper. The transmitter secures its transmissions by employing a sparse implementation of Random Linear Network Coding (RLNC). A tight approximation to the probability of the eavesdropper recovering the source message is provided. The proposed approximation applies both to the case where transmissions occur without feedback and to the case where the reliability of the feedback channel is impaired by an eavesdropper jamming it. An optimization framework for minimizing the intercept probability by tuning the sparsity of the RLNC is also presented. Results validate the proposed approximation and quantify the gain provided by our optimization over solutions employing non-sparse RLNC.
The exponential increase in mobile video delivery will continue with the demand for higher-resolution, multi-view and large-scale multicast video services. The novel fifth generation (5G) 3GPP New Radio (NR) standard will bring a number of new opportunities for optimizing video delivery across both the 5G core and radio access networks. One of the promising approaches for video quality adaptation, throughput enhancement and erasure protection is the use of packet-level random linear network coding (RLNC). In this review paper, we discuss the integration of RLNC into the 5G NR standard, building upon the ideas and opportunities identified in 4G LTE. We explicitly identify and discuss in detail novel 5G NR features that provide support for RLNC-based video delivery in 5G, thus pointing to promising avenues for future research.
Connected and Autonomous Vehicles (CAVs) will play a crucial role in next-generation Cooperative Intelligent Transportation Systems (C-ITSs). Not only is information exchange fundamental to improving road safety and efficiency, but it also paves the way for a wide spectrum of advanced ITS applications enhancing efficiency, mobility and accessibility. Highly dynamic network topologies and unpredictable wireless channel conditions entail numerous design challenges and open questions. In this paper, we address the beneficial interactions between CAVs and an ITS and propose a novel architecture design paradigm. Our solution can accommodate multi-layer applications over multiple Radio Access Technologies (RATs) and provides a smart configuration interface for enhancing the performance of each RAT.
We consider a lossy multicast network in which reliability is provided by means of Random Linear Network Coding (RLNC). Our goal is to characterise the performance of such a network in terms of the probability that a source message is delivered to all destination nodes. Previous studies considered coding over large finite fields, small numbers of destination nodes or specific, often impractical, channel conditions. In contrast, we focus on a general problem, considering arbitrary field size and number of destination nodes, as well as a realistic channel. We propose a lower bound on the probability of successful delivery, which is more accurate than the approximation commonly used in the literature. In addition, we present a novel performance analysis of the systematic version of RLNC. The accuracy of the proposed performance framework is verified via extensive Monte Carlo simulations, where the impact of the network and code parameters is investigated. Specifically, we show that the mean square error of the bound for a ten-user network can be as low as 9E-5 for non-systematic RLNC.
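The quantity analysed above can be illustrated with a short sketch. Under the standard assumptions of independent, uniformly random coding coefficients (non-systematic RLNC) and i.i.d. packet erasures, a single destination decodes a generation of K source packets once it has collected K linearly independent coded packets; the helper names below are ours, not the paper's:

```python
from math import comb

def full_rank_prob(n, k, q):
    """Probability that n uniformly random length-k vectors over GF(q)
    contain k linearly independent ones (classic product formula)."""
    if n < k:
        return 0.0
    p = 1.0
    for i in range(k):
        p *= 1.0 - q ** (i - n)
    return p

def delivery_prob(t, k, q, eps):
    """Probability that a destination decodes a k-packet generation after
    t transmissions over an erasure channel with erasure probability eps:
    average full_rank_prob over the Binomial(t, 1-eps) received count."""
    return sum(
        comb(t, n) * (1 - eps) ** n * eps ** (t - n) * full_rank_prob(n, k, q)
        for n in range(k, t + 1)
    )
```

For a multicast network, the delivery-to-all probability under independent links is simply the product of `delivery_prob` across destinations; the paper's contribution is a tighter bound than this independence-based approximation.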
Ultra-reliable Point-to-Multipoint (PtM) communications are expected to become pivotal in networks offering future dependable services for smart cities. In this regard, sparse Random Linear Network Coding (RLNC) techniques have been widely employed as an efficient way to improve the reliability of broadcast and multicast data streams. This paper addresses the pressing concern of providing a tight approximation to the probability of a user recovering a data stream protected by this kind of coding technique. In particular, by exploiting the Stein-Chen method, we provide a novel and general performance framework applicable to any combination of system and service parameters, such as the finite field size, the length of the data stream and the level of sparsity. The deviation of the proposed approximation from Monte Carlo simulations is negligible, improving significantly on state-of-the-art performance bounds.
Connected and autonomous vehicles will play a pivotal role in future Intelligent Transportation Systems (ITSs) and smart cities in general. High-speed and low-latency wireless communication links will allow municipalities to warn vehicles against safety hazards, as well as support cloud-driving solutions to drastically reduce traffic jams and air pollution. To achieve these goals, vehicles need to be equipped with a wide range of sensors generating and exchanging high-rate data streams. Recently, millimeter wave (mmWave) techniques have been introduced as a means of fulfilling such high data rate requirements. In this paper, we model a highway communication network and characterize its fundamental link budget metrics. In particular, we consider a network where vehicles are served by mmWave Base Stations (BSs) deployed alongside the road. To evaluate our highway network, we develop a new theoretical model that accounts for a typical scenario where heavy vehicles (such as buses and lorries) in slow lanes obstruct Line-of-Sight (LOS) paths of vehicles in fast lanes and, hence, act as blockages. Using tools from stochastic geometry, we derive approximations for the Signal-to-Interference-plus-Noise Ratio (SINR) outage probability, as well as the probability that a user achieves a target communication rate (rate coverage probability). Our analysis provides new design insights for mmWave highway communication networks. In the considered highway scenarios, we show that reducing the horizontal beamwidth from 90° to 30° yields only a minimal reduction in the SINR outage probability (namely, at most 4E-2). Also, unlike bi-dimensional mmWave cellular networks, for small BS densities (namely, one BS every 500 m) it is still possible to achieve an SINR outage probability smaller than 0.2.
Characterization of the delay profile of systems employing random linear network coding is important for the reliable provision of broadcast services. Previous studies focused on network coding over large finite fields or developed Markov chains to model the delay distribution, but did not look at the effect of transmission deadlines on the delay. In this work, we consider generations of source packets that are encoded and transmitted over the erasure broadcast channel. The transmission of packets associated with a generation is deadline-constrained, that is, the transmitter drops a generation and proceeds to the next one when a predetermined deadline expires. Closed-form expressions for the average number of required packet transmissions per generation are obtained in terms of the generation size, the field size, the erasure probability and the deadline choice. An upper bound on the average decoding delay, which is tighter than previous bounds found in the literature, is also derived. Analysis shows that the proposed framework can be used to fine-tune the system parameters and ascertain that neither insufficient nor excessive amounts of packets are sent over the broadcast channel.
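The deadline-constrained transmission count described above can be estimated empirically. The sketch below is a simplified Monte Carlo model, not the paper's closed-form analysis: it assumes a single receiver with ideal feedback (the sender stops as soon as the receiver reaches full rank or the deadline expires), coding over GF(2), and non-zero random coding vectors; all function names are illustrative:

```python
import random

def insert(basis, v):
    """Insert bitmask v into a GF(2) linear basis (dict: pivot bit -> row).
    Returns True if v was linearly independent of the current basis."""
    while v:
        h = v.bit_length() - 1
        if h in basis:
            v ^= basis[h]
        else:
            basis[h] = v
            return True
    return False

def avg_tx_per_generation(k, eps, deadline, trials=2000, seed=1):
    """Monte Carlo estimate of the average number of packet transmissions
    per generation of size k over an erasure channel (erasure prob. eps),
    with the generation dropped once `deadline` transmissions are used."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        basis, rank, tx = {}, 0, 0
        while rank < k and tx < deadline:
            tx += 1
            if rng.random() >= eps:            # packet survives the channel
                v = rng.randrange(1, 1 << k)   # non-zero GF(2) coding vector
                if insert(basis, v):
                    rank += 1
        total += tx
    return total / trials
```

For k = 4 and a loss-free channel this averages slightly above 5 transmissions, reflecting the extra packets needed to overcome linearly dependent coding vectors over GF(2).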
Point-to-multipoint communications are expected to play a pivotal role in next-generation networks. This paper refers to a cellular system transmitting layered multicast services to a multicast group of users. Reliability of communications is ensured via different Random Linear Network Coding (RLNC) techniques. We deal with a fundamental problem: the computational complexity of the RLNC decoder. The higher the number of decoding operations, the more the user's computational overhead grows and, consequently, the faster the battery of mobile devices drains. By referring to several sparse RLNC techniques, and without any assumption on the implementation of the RLNC decoder in use, we provide an efficient way to characterize the performance of users targeted by ultra-reliable layered multicast services. The proposed modeling allows us to efficiently derive the average number of coded packet transmissions needed to recover one or more service layers. We design a convex resource allocation framework that minimizes the complexity of the RLNC decoder by jointly optimizing the transmission parameters and the sparsity of the code. The designed optimization framework also ensures service guarantees to predetermined fractions of users. The performance of the proposed optimization framework is then investigated in an LTE-A eMBMS network multicasting H.264/SVC video services.
This letter considers a network comprising a transmitter, which employs random linear network coding to encode a message, a legitimate receiver, which can recover the message if it gathers a sufficient number of linearly independent coded packets, and an eavesdropper. Closed-form expressions for the probability of the eavesdropper intercepting enough coded packets to recover the message are derived. Transmission with and without feedback is studied. Furthermore, an optimization model that minimizes the intercept probability under delay and reliability constraints is presented. Results validate the proposed analysis and quantify the secrecy gain offered by a feedback link from the legitimate receiver.
The explosive growth of content-on-the-move, such as video streaming to mobile devices, has propelled research on multimedia broadcast and multicast schemes. Multi-rate transmission strategies have been proposed as a means of delivering layered services to users experiencing different downlink channel conditions. In this paper, we consider Point-to-Multipoint layered service delivery across a generic cellular system and improve it by applying different random linear network coding approaches. We derive packet error probability expressions and use them as performance metrics in the formulation of resource allocation frameworks. The aim of these frameworks is both the optimization of the transmission scheme and the minimization of the number of broadcast packets on each downlink channel, while offering service guarantees to a predetermined fraction of users. As a case study, our proposed frameworks are then adapted to the LTE-A standard and the eMBMS technology. We focus on the delivery of a video service based on the H.264/SVC standard and demonstrate the advantages of layered network coding over multi-rate transmission. Furthermore, we establish that the choice of both the network coding technique and the resource allocation method plays a critical role in the network footprint and the quality of each received video layer.
Video service delivery over 3GPP Long Term Evolution-Advanced (LTE-A) networks is gaining momentum with the adoption of the evolved Multimedia Broadcast Multicast Service (eMBMS). In this correspondence, we address the challenge of optimizing the radio resource allocation process so that heterogeneous groups of users, classified according to their propagation conditions, can receive layered video streams at a predefined and progressively decreasing set of service levels matched to the respective user groups. A key aspect of the proposed system model is that video streams are delivered as eMBMS flows by using the random linear network coding principle. Furthermore, the transmission rate and network coding scheme of each eMBMS flow are jointly optimized. The simulation results show that the proposed strategy can exploit the user heterogeneity in order to optimize the allocated radio resources while achieving desired service levels for different user groups.
We propose an energy-efficient resource allocation framework suitable for multicast service delivery over 3GPP's LTE-A SFN-eMBMS networks. A key aspect of the considered system model is that multicast communications are delivered according to the RLNC principle. The proposed optimization framework aims at minimizing the transmission energy associated with the delivery of a set of multicast flows. The goal is achieved by jointly optimizing the transmission power and RLNC scheme of each flow. Furthermore, we present a heuristic strategy that can efficiently find a good-quality feasible solution of the presented resource allocation model.
This correspondence deals with the modelling and analysis of the resequencing delay in Time Division Duplexing communication systems that adopt the Selective Repeat Automatic Repeat-reQuest error control strategy. Under the assumption that packet misordering at the receiving end is induced by channel errors, the correspondence proposes an analytical approach based on Absorbing Markov Chain theory in order to accurately predict the impact of the resequencing delay on the quality of the provided services. Numerical results, derived by means of computer simulations, are also given in order to validate the proposed analytical model.
In this letter, we propose two optimized multicast communication strategies based on the Network Coding principle, which aim to significantly improve performance in terms of the power cost and delivery delay associated with the transmission of the whole data flow. The proposed strategies are of special interest for services delivered in an unreliable mode. The reported numerical results clearly show that both strategies achieve the aforementioned goals in comparison with the Random Linear Network Coding alternative.
This paper deals with the performance evaluation and optimization of an efficient Hybrid Automatic Repeat Request (HARQ) scheme suitable for applications delivered over lossy multicast communication channels. In particular, differently from previously investigated strategies, this paper proposes a Modified HARQ scheme based on the Symbol Combining principle (MHARQ-SC), where multiple copies of the same packet are consecutively transmitted at each transmission opportunity. By considering as performance metrics the mean packet delivery delay and energy consumption per information packet, this paper presents suitable performance evaluation and optimization strategies tailored for multicast communications. For comparison, the performance of different HARQ schemes optimized for multicast communications has also been analysed under the same operational conditions. Numerical results are provided in order to validate the proposed performance evaluation and optimization approaches in the case of the MHARQ-SC scheme. An important finding is that the reported analytical results clearly highlight the performance gain of the proposed strategy in comparison with all the other considered alternatives.
The most recent trend in the Information and Communication Technology world is toward an ever-growing demand for heterogeneous mobile services, which implies the management of different quality-of-service requirements and priorities among different types of users. The long-term evolution (LTE)/LTE-advanced standards have been introduced to cope with this challenge. In particular, the resource allocation problem in downlink needs to be carefully considered. Herein, a solution based on a modified multidimensional multiple-choice knapsack problem model is proposed, leading to an efficient algorithm. The proposed algorithm is able to manage different traffic flows, taking into account user priority, queue delay, and channel conditions, achieving quasi-optimal performance with lower complexity. The numerical results show the effectiveness of the proposed solution with respect to other alternatives.
In this paper we investigate the performance advantages achieved by the use of the Symbol Combining (SC) approach in a Random Linear Network Coding (RLNC) scheme for broadcast communications over lossy channels. In particular, the focus is on a modified RLNC scheme that makes use of repeated transmissions of each data symbol belonging to the same coded packet in order to implement the SC approach at the receiving ends. By considering as objective metrics the mean number of transmissions and the energy consumption for each coded packet, two optimization procedures are proposed and compared in the paper. We consider a broadcast network model where an access point has to broadcast coded packets to a set of receiving nodes. In addition, the analysis presented in the paper is extended to broadcast communications in butterfly topology networks. For all the considered scenarios, the superior behavior of the symbol-combined RLNC scheme is clearly evident in comparison with basic RLNC, without requiring additional implementation complexity at each receiving end.
Recent years have been characterized by increasing interest in grid and cloud computing, which allow the implementation of high-performance computing structures in a distributed way by exploiting multiple processing resources. The presence of mobile terminals has extended the paradigm to the so-called pervasive grid networks, where multiple heterogeneous devices are interconnected to form a distributed computing resource. In such a scenario, there is a need for efficient techniques to provide reliable wireless connections among network nodes. This paper proposes a suitable resource management scheme relying on a routing algorithm able to jointly perform resource discovery and task scheduling, implementing an efficient pervasive grid infrastructure in a wireless ad hoc scenario. The proposed solutions have been considered within two different parallelization processing schemes, and their effectiveness has been verified by means of computer simulations.
Future Connected and Autonomous Vehicles (CAVs) will be equipped with a large set of sensors. The large amount of generated sensor data is expected to be exchanged with other CAVs and the road-side infrastructure. Both in Europe and the US, Dedicated Short Range Communications (DSRC) systems, based on the IEEE 802.11p Physical Layer, are a key enabler for communication among vehicles. Given the expected market penetration of connected vehicles, the licensed band of 75 MHz dedicated to DSRC communications is expected to become increasingly congested. In this paper, we investigate the performance of a vehicular communication system operated over the unlicensed bands 2.4 GHz-2.5 GHz and 5.725 GHz-5.875 GHz. Our experimental evaluation was carried out on a testing track in the centre of Bristol, UK, and our system is a full-stack ETSI ITS-G5 implementation. Our performance investigation compares key communication metrics (e.g., packet delivery rate, received signal strength indicator) measured by operating our system over the licensed DSRC and the considered unlicensed bands. In particular, when operated over the 2.4 GHz-2.5 GHz band, our system achieves comparable performance to the case when the DSRC band is used. On the other hand, when the system is operated over the 5.725 GHz-5.875 GHz band, the packet delivery rate is 30% smaller compared to the case when the DSRC band is employed. These findings prove that operating our system over unlicensed ISM bands is a viable option. During our experimental evaluation, we recorded all the generated network interactions, and the complete data set has been made publicly available.
Connected and Automated Vehicles (CAVs) are expected to constantly interact with a network of processing nodes installed in secure cabinets located at the side of the road -- thus forming a Fog Computing-based infrastructure for Intelligent Transportation Systems (ITSs). Future city-scale ITS services will heavily rely upon the sensor data regularly off-loaded by each CAV onto the Fog Computing network. Due to the broadcast nature of the medium, CAVs' communications can be vulnerable to eavesdropping. This paper proposes a novel data offloading approach where the Random Linear Network Coding (RLNC) principle is used to minimize the probability of an eavesdropper recovering relevant portions of sensor data. Our preliminary results confirm the effectiveness of our approach when operated in large-scale ITS networks.
Future Connected and Automated Vehicles (CAVs) will be supervised by cloud-based systems overseeing the overall security and orchestrating traffic flows. Such systems rely on data collected from CAVs across the whole city operational area. This paper develops a Fog Computing-based infrastructure for future Intelligent Transportation Systems (ITSs) enabling an agile and reliable off-load of CAV data. Since CAVs are expected to generate large quantities of data, it is not feasible to assume data off-loading to be completed while a CAV is in the proximity of a single Road-Side Unit (RSU). CAVs are expected to be in the range of an RSU only for a limited amount of time, necessitating data reconciliation across different RSUs, if traditional approaches to data off-load were to be used. To this end, this paper proposes an agile Fog Computing infrastructure, which interconnects all the RSUs so that the data reconciliation is solved efficiently as a by-product of deploying the Random Linear Network Coding (RLNC) technique. Our numerical results confirm the feasibility of our solution and show its effectiveness when operated in a large-scale urban testbed.
Millimeter Waves (mmWaves) will play a pivotal role in the next generation of Intelligent Transportation Systems (ITSs). However, in deep urban environments, sensitivity to blockages creates the need for more sophisticated network planning. In this paper, we present an agile strategy for deploying road-side nodes in a dense city scenario. In our system model, we consider strict Quality-of-Service (QoS) constraints (e.g., high throughput, low latency) that are typical of ITS applications. Our approach is scalable, insofar as it takes into account the unique road and building shapes of each city, performing well for both regular and irregular city layouts. It not only achieves the required QoS constraints but also provides up to a 50% reduction in the number of nodes required, compared to existing deployment solutions.
The field of parallel network simulation frameworks is evolving at a great pace. This is partly due to the growth of Intelligent Transportation Systems (ITSs) and the necessity for cost-effective large-scale trials. In this contribution, we focus on the INET Framework and how we re-factored its single-thread code to make it run in a multi-thread fashion. Our parallel version of the INET Framework can significantly reduce the computation time in city-scale scenarios, and it is completely transparent to the user. When tested in different configurations, our version of INET ensures a reduction in the computation time of up to 43%.
Connected Autonomous Vehicles (CAVs) are pitched as drivers of rapid growth for a future of driverless, safe and efficient transportation. However, CAV use may face challenges as their acceptance is intertwined with individuals' intention to use innovative technologies and the way they perceive related risks. In this paper, we present preliminary results of a pilot study regarding individuals' intention to use this new technological platform of connected autonomous vehicles. We are concerned in particular with perceptions of risk due to the collection and processing of personal data for the delivery of effective services. We examine whether perceived privacy risks, perceived ease of use and cultural bias affect individuals' decisions to adopt CAVs as a mobility option. We have preliminarily concluded that different cultural types of individuals should be addressed in different ways to achieve broad adoption of connected autonomous vehicles.
As we move towards autonomous vehicles, a reliable Vehicle-to-Everything (V2X) communication framework becomes of paramount importance. In this paper we present the development and the performance evaluation of a real-world vehicular networking testbed. Our testbed, deployed in the heart of the City of Bristol, UK, is able to exchange sensor data in a V2X manner. We describe the testbed architecture and its operational modes, and then provide some insight pertaining to the firmware operating on the network devices. The system performance has been evaluated through a series of large-scale field trials, which have shown that our solution represents a low-cost, high-quality framework for V2X communications. Our system managed to achieve high packet delivery ratios under different scenarios (urban, rural, highway) and for different locations around the city. We have also identified the instability of the packet transmission rate while using single-core devices, and we present some future directions to address this issue.
Connected and Autonomous Vehicles (CAVs) require continuous access to sensory data to perform complex high-speed maneuvers and advanced trajectory planning. High-priority CAVs are particularly reliant on the extended perception horizon facilitated by sensory data exchange between CAVs. Existing technologies such as Dedicated Short Range Communications (DSRC) are ill-equipped to provide an advanced cooperative perception service. This creates the need for more sophisticated technologies such as 5G Millimetre Waves (mmWaves). In this work, we propose a distributed Vehicle-to-Vehicle (V2V) mmWave association scheme operating in a heterogeneous manner. Our system utilises the information exchanged within the DSRC frequency band to bootstrap the formation of the best CAV pairs. Using a Stable Fixtures Matching Game, we form V2V multipoint-to-multipoint links. Compared to more traditional point-to-point links, our system provides almost twice as much sensory data exchange capacity for high-priority CAVs while doubling the mmWave channel utilisation for all the vehicles in the network.
Gigabit-per-second connectivity among vehicles is expected to be a key enabling technology for sensor information sharing, in turn, resulting in safer Intelligent Transportation Systems (ITSs). Recently proposed millimeter-wave (mmWave) systems appear to be the only solution capable of meeting the data rate demand imposed by future ITS services. In this poster, we assess the performance of a mmWave device-to-device (D2D) vehicular network by investigating the impact of system and communication parameters on end-users.
Computer simulations and real-world car trials are essential to investigate the performance of Vehicle-to-Everything (V2X) networks. However, simulations are imperfect models of the physical reality and can be trusted only when they indicate agreement with the real-world. On the other hand, trials lack reproducibility and are subject to uncertainties and errors. In this paper, we will illustrate a case study where the interrelationship between trials, simulation, and the reality-of-interest is presented. Results are then compared in a holistic fashion. Our study will describe the procedure followed to macroscopically calibrate a full-stack network simulator to conduct high-fidelity full-stack computer simulations.
The successful deployment of safe and trustworthy Connected and Autonomous Vehicles (CAVs) will highly depend on the ability to devise robust and effective security solutions to resist sophisticated cyber attacks and patch up critical vulnerabilities. Pseudonym Public Key Infrastructure (PPKI) is a promising approach to secure vehicular networks as well as ensure data and location privacy, concealing the vehicles' real identities. Nevertheless, pseudonym distribution and management affect PPKI scalability due to the significant number of digital certificates required by a single vehicle. In this paper, we focus on the certificate revocation process and propose a versatile and low-complexity framework to facilitate the distribution of the Certificate Revocation Lists (CRL) issued by the Certification Authority (CA). CRL compression is achieved through optimized Bloom filters, which guarantee a considerable overhead reduction with a configurable rate of false positives. Our results show that the distribution of compressed CRLs can significantly enhance the system scalability without increasing the complexity of the revocation process.
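The Bloom-filter trade-off described above (compression at the cost of a tunable false-positive rate, with no false negatives) can be sketched as follows. This is a generic illustration, not the paper's optimized construction; the class name and sizing choices are ours:

```python
import hashlib
from math import ceil, log

class BloomFilter:
    """Minimal Bloom filter sized for a target false-positive rate."""

    def __init__(self, capacity, fp_rate):
        # Standard sizing: m = -n*ln(p)/ln(2)^2 bits, k = (m/n)*ln(2) hashes
        self.m = max(1, ceil(-capacity * log(fp_rate) / log(2) ** 2))
        self.k = max(1, round(self.m / capacity * log(2)))
        self.bits = bytearray((self.m + 7) // 8)

    def _positions(self, item):
        # Double hashing: derive k bit positions from two SHA-256 halves
        d = hashlib.sha256(item.encode()).digest()
        h1 = int.from_bytes(d[:16], "big")
        h2 = int.from_bytes(d[16:], "big")
        return [(h1 + i * h2) % self.m for i in range(self.k)]

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        return all(self.bits[p // 8] >> (p % 8) & 1
                   for p in self._positions(item))
```

A revoked certificate inserted into the filter is always reported as revoked (no false negatives), while a small, configurable fraction of valid certificates may be flagged for a fallback exact check; the filter itself is far smaller than the full CRL.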
Millimetre Wave (mmWave) systems have the potential of enabling multi-gigabit-per-second communications in future Intelligent Transportation Systems (ITSs). Unfortunately, because of the increased vehicular mobility, they require frequent antenna beam realignments, thus significantly increasing the in-band Beamforming (BF) overhead. In this paper, we propose Smart Motion-prediction Beam Alignment (SAMBA), a MAC-layer algorithm that exploits the information broadcast via DSRC beacons by all vehicles. Based on this information, overhead-free BF is achieved by estimating the position of the vehicle and predicting its motion. Moreover, adapting the beamwidth with respect to the estimated position can further enhance the performance. Our investigation shows that SAMBA outperforms the IEEE 802.11ad BF strategy, more than doubling the data rate for sparse vehicle density while enhancing the network throughput proportionally to the number of vehicles. Furthermore, SAMBA was proven to be more efficient than the legacy BF algorithm under highly dynamic vehicular environments and, hence, a viable solution for future ITS services.
Intelligent Transportation Systems (ITSs) require ultra-low end-to-end delays and multi-gigabit-per-second data transmission. Millimetre Wave (mmWave) communications can fulfil these requirements. However, the increased mobility of Connected and Autonomous Vehicles (CAVs) requires frequent beamforming, thus introducing increased overhead. In this paper, a new beamforming algorithm is proposed that achieves overhead-free beamforming training. Leveraging the sensory data broadcast by CAVs within Dedicated Short Range Communications (DSRC) beacons, the position and the motion of a CAV can be estimated and the beam steered accordingly. To minimise the position errors, an analysis of the distinct error components is presented. The network performance is further enhanced by adapting the antenna beamwidth with respect to the position error. Our algorithm outperforms the legacy IEEE 802.11ad approach, proving it to be a viable solution for future ITS applications and services.
Obtaining high-quality sensor information is critical in vehicular emergencies. However, existing standards such as IEEE 802.11p/DSRC and LTE-A cannot support either the required data rates or the latency requirements. One solution to this problem is for municipalities to invest in dedicated base stations to ensure that drivers have the information they need to make safe decisions in or near accidents. In this paper we further propose that these municipality-owned base stations form a Single Frequency Network (SFN). In order to ensure that transmissions are reliable, we derive tight bounds on the outage probability when the SFN is overlaid on an existing cellular network. Using our bounds, we propose a transmission power allocation algorithm. We show that our power allocation model can reduce the total instantaneous SFN transmission power by up to 20 times compared to a static uniform power allocation solution, for the considered scenarios. The result is particularly important when base stations rely on an off-grid power source (i.e., batteries).
In this paper, we analyze the performance of a single-relay network in which reliability is provided by means of Random Linear Network Coding (RLNC). We consider a scenario in which both source and relay nodes can encode packets. Unlike the traditional approach to relay networks, we introduce a passive relay mode, in which the relay node simply retransmits collected packets in case it cannot decode them. In contrast with previous studies, we derive a novel theoretical framework for the performance characterization of the considered relay network. We extend our analysis to a more general scenario, in which coding coefficients are generated from non-binary fields. The theoretical results are verified via simulation, for both binary and non-binary fields. It is also shown that the passive relay mode significantly improves the performance compared with the active-only case, offering up to a two-fold gain in terms of the decoding probability. The proposed framework can be used as a building block for the analysis of more complex network topologies.
The delivery of multimedia multicast services is likely to become one of the main pillars of next-generation cellular networks. In this extended abstract, we address the issue of efficiently multicasting layered video services by defining a novel optimization paradigm that is based on an Unequal Error Protection implementation of Random Linear Network Coding, and aims to ensure target service coverage levels by using a limited amount of radio resources.
Long Term Evolution-Advanced (LTE-A) and the evolved Multimedia Broadcast Multicast System (eMBMS) are the most promising technologies for the delivery of highly bandwidth demanding applications. In this paper we propose a green resource allocation strategy for the delivery of layered video streams to users with different propagation conditions. The goal of the proposed model is to minimize the user energy consumption. That goal is achieved by minimizing the time required by each user to receive the broadcast data via an efficient power transmission allocation model. A key point in our system model is that the reliability of layered video communications is ensured by means of the Random Linear Network Coding (RLNC) approach. Analytical results show that the proposed resource allocation model ensures the desired quality of service constraints, while the user energy footprint is significantly reduced.
Delivery of multicast video services over fourth generation (4G) networks such as 3GPP Long Term Evolution-Advanced (LTE-A) is gaining momentum. In this paper, we address the issue of efficiently multicasting layered video services by defining a novel resource allocation framework that aims to maximize the service coverage whilst keeping the radio resource footprint low. A key point in the proposed system model is that the reliability of multicast video services is ensured by means of an Unequal Error Protection implementation of the Network Coding (UEP-NC) scheme. In addition, both the communication parameters and the UEP-NC scheme are jointly optimized by the proposed resource allocation framework. Numerical results show that the proposed allocation framework can significantly increase the service coverage when compared to a conventional Multi-rate Transmission (MrT) strategy.
We consider binary systematic network codes and investigate their capability of decoding a source message either in full or in part. We carry out a probability analysis, derive closed-form expressions for the decoding probability and show that systematic network coding outperforms conventional network coding. We also develop an algorithm based on Gaussian elimination that allows progressive decoding of source packets. Simulation results show that the proposed decoding algorithm can achieve the theoretical optimal performance. Furthermore, we demonstrate that systematic network codes equipped with the proposed algorithm are good candidates for progressive packet recovery owing to their overall decoding delay characteristics.
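The progressive decoding idea behind such a Gaussian elimination decoder can be sketched as follows: coding vectors over GF(2) are kept in reduced row-echelon form, and a source packet is released as soon as its row collapses to a unit vector. This is a minimal sketch under those assumptions; the class and method names are illustrative, not the paper's implementation.

```python
class ProgressiveDecoder:
    """On-the-fly Gaussian elimination over GF(2) (illustrative sketch).

    Coding vectors and payloads are stored as Python ints; bit i of a
    coding vector is the GF(2) coefficient of source packet i.
    """

    def __init__(self, n):
        self.n = n          # number of source packets
        self.rows = {}      # pivot index -> (coding_vector, payload)

    def receive(self, vector, payload):
        # Forward-eliminate the incoming packet against existing pivots.
        for pivot, (v, p) in self.rows.items():
            if vector >> pivot & 1:
                vector ^= v
                payload ^= p
        if vector == 0:
            return  # linearly dependent: discard
        pivot = (vector & -vector).bit_length() - 1  # lowest set bit
        # Back-substitute the new row to keep reduced row-echelon form.
        for q, (v, p) in list(self.rows.items()):
            if v >> pivot & 1:
                self.rows[q] = (v ^ vector, p ^ payload)
        self.rows[pivot] = (vector, payload)

    def recovered(self):
        # Source packet i is decoded once its row is the unit vector e_i.
        return {i: p for i, (v, p) in self.rows.items() if v == 1 << i}
```

Because rows are kept fully reduced, partially decoded packets become available progressively, before the whole generation has full rank.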
Fountain codes are gradually being incorporated into broadcast technologies, such as multimedia streaming for 4G mobile communications. In this paper, we investigate the capability of existing fountain-coded schemes to recover a fraction of the source data at an early stage and progressively retrieve the remaining source packets as additional coded packets arrive. To this end, we propose a practical Gaussian elimination decoder, which can recover source packets "on-the-fly". Furthermore, we introduce a framework for the assessment of progressive packet recovery, we carry out a performance comparison of the investigated schemes and we discuss the advantages and drawbacks of each scheme.
In this paper we propose a pair of power allocation strategies for the delivery of layered video services over evolved Multimedia Multicast/Broadcast Service (eMBMS) networks. The proposed allocations aim at reducing the power consumption of the eNodeB (eNB) and improving the user quality of experience of the delivered eMBMS flows. We consider multiple challenging scenarios which differ in: (i) the number of eNBs transmitting the same service set, and (ii) how services are delivered. In particular, we consider scenarios where services are delivered with or without resorting to the Random Network Coding principle. We compare the proposed resource allocation models to a strategy which equally shares the transmission power budget among the layers of the delivered service. Analytical results show that the proposed resource allocation strategies are characterized by a transmission power which is on average 13% smaller than that of the considered alternative. In addition, we show that the resource allocation derived by the proposed strategy can deliver each layered video service over a geographical area which is up to 25% greater than that associated with the considered alternative.
This paper deals with the reliable transmission of short messages via satellite. The problem is of paramount importance for emergency and alerting scenarios. In particular, we propose a novel, efficient Network Coding (NC) communication scheme aiming to improve the delivery probability of the transmitted information messages. A suitable analytical approach has been developed in order to characterize the performance of the proposed NC scheme and to allow its optimization. The accuracy of the proposed approach has been validated by resorting to computer simulations. Performance comparisons with the classical NC scheme are also presented to highlight the advantages of the proposed NC scheme in the case of AWGN and Rician communication channels.
In this paper, we propose a novel advanced multi-rate design for evolved Multimedia Multicast/Broadcast Service (eMBMS) in fourth generation (4G) Long-Term Evolution (LTE)/LTE-Advanced (LTE-A) networks. The proposed design provides: i) reliability, based on random network coded (RNC) transmission, and ii) efficiency, obtained by optimized rate allocation across multi-rate RNC streams. The paper provides an in-depth description of the system realization and demonstrates the feasibility of the proposed eMBMS design using both analytical and simulation results. The system performance is compared with popular multi-rate multicast approaches in a realistic simulated LTE/LTE-A environment.
This paper presents a novel energy-aware communication scheme based on random network coding that is suitable for multicast and broadcast data delivery over Long Term Evolution (LTE) and LTE-Advanced networks. The proposed energy-aware transmission scheme minimises the average energy consumption of the macro base station that is required to deliver a message to all users in a multicast group in an LTE-A network. The energy saving gains and improved performance of the proposed scheme are compared to classical error control strategies. The reported analytical results clearly show a performance improvement of almost two-fold compared to the considered alternative.
3GPP's Long Term Evolution (LTE) represents one of the most valuable alternatives for offering wireless broadband access in a fully mobile network context. In particular, LTE is able to manage several communication flows characterized by different QoS constraints. This paper deals with a network topology where the mobile users are clustered in Multicast Groups and the base station broadcasts a different traffic flow to each cluster. In order to improve the network throughput on a per-user basis, all communications rely on a Random Linear Network Coding (RLNC) scheme. A key aspect of QoS management is the power adaptation strategy in use. This paper proposes a novel convex formulation of the power adaptation problem for the downlink phase that takes into account the specific RLNC scheme adopted by each communication flow. Thanks to the proposed convex formulation, an optimal solution to the problem can be found quickly, in real time. Moreover, the proposed power adaptation strategy shows good performance in terms of throughput and fairness among the users when compared with other alternatives.
Long Term Evolution (LTE) is considered one of the main candidates for providing wireless broadband access to mobile users. Among the main LTE characteristics, flexibility and efficiency can be guaranteed by resorting to suitable resource allocation schemes, in particular by adopting adaptive OFDM schemes. This paper proposes a novel solution to the sub-carrier allocation problem for the LTE downlink that takes into account the queue lengths, the QoS constraints and the channel conditions. Each user has different queues, one for each QoS class, and can transmit at a different data rate depending on the propagation conditions. The proposed algorithm assigns a value to each possible sub-carrier assignment as a linear combination of all the inputs, following a cross-layer approach. The problem is formulated as a Multidimensional Multiple-choice Knapsack Problem (MMKP), whose optimal solution is not feasible for our purposes due to the excessively long computing time required to find it. Hence, a novel efficient heuristic is proposed to solve the problem. Results show good performance of the proposed resource allocation scheme in terms of both throughput and delay, while guaranteeing fairness among the users. Performance is also compared with a fixed allocation scheme and round robin.
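The cross-layer scoring described above can be illustrated with a small sketch: each (user, sub-carrier) pair receives a value computed as a linear combination of queue length, achievable channel rate and QoS urgency, after which sub-carriers are assigned. The weights and the simple greedy assignment rule below are illustrative assumptions, not the paper's calibrated MMKP heuristic.

```python
def cross_layer_value(queue_len, channel_rate, qos_weight, w=(0.4, 0.4, 0.2)):
    """Score one (user, sub-carrier) pair as a linear combination of the
    cross-layer inputs; the weights `w` are illustrative assumptions."""
    return w[0] * queue_len + w[1] * channel_rate + w[2] * qos_weight

def allocate_subcarriers(values):
    """Greedy stand-in for an MMKP-style allocator: give each sub-carrier
    to the user with the highest value for it.

    `values[u][s]` is the cross-layer value of user u on sub-carrier s.
    Returns a list mapping each sub-carrier index to its assigned user.
    """
    n_users, n_sub = len(values), len(values[0])
    return [max(range(n_users), key=lambda u: values[u][s])
            for s in range(n_sub)]
```

A per-sub-carrier greedy pass runs in O(users x sub-carriers) time, which is the kind of complexity reduction a heuristic must deliver relative to exact MMKP solvers.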
The interest in real-time computing has led to an even greater interest in grid computing. While in the past grid computing was implemented on high-performance computers, in recent years there has been increasing interest in pervasive grid scenarios, where multiple devices can be used for distributed computing. The most challenging idea is to use mobile devices, connected among them through wireless links, to set up pervasive grid environments. In this context, the optimization of the routing algorithms among the processing nodes is a crucial problem for satisfying the performance requirements of distributed computing. The aim of this paper is the design of specific routing algorithms for different pervasive grid applications, with particular attention to time-sensitive scenarios.
In recent years, ever more attention has been devoted to emergency and crisis management systems. Both communication and computing aspects are of primary importance for a fast and reliable response in these scenarios. Unlike other approaches, in this paper we consider an integrated communication and computing solution, proposing a joint approach in which a distributed computing platform and a heterogeneous meshed communication system interoperate in order to enhance the system reliability and readiness.
Network coding (NC) is a promising technique recently proposed to improve network performance in terms of maximum throughput, minimum delivery delay, and energy consumption. The original proposal highlighted the advantages of NC for multicast communications in wire-line networks. More recently, network coding has been considered as an efficient approach to improve performance in wireless networks, mainly in terms of data reliability and lower energy consumption, especially for broadcast communications. The basic idea of NC is to remove the typical requirement that different information flows have to be processed and transmitted independently through the network. When NC is applied, intermediate nodes in the network do not simply relay the received packets, but combine several received packets before transmission. As a consequence, the output flow at a given node is obtained as a linear combination of its input flows. This chapter deals with the application of the network coding principle at different layers of the protocol stack, specifically the Medium Access Control (MAC) and physical (PHY) layers, for wireless communication networks.
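The "linear combination of input flows" idea can be made concrete with a tiny GF(2) example, where packets are bit strings (here, Python ints) and addition is bitwise XOR. The function name and structure below are illustrative assumptions, not a scheme from this chapter.

```python
import random

def recode(received, rng=None):
    """Emit one coded packet as a random GF(2) linear combination of the
    packets buffered at an intermediate node (illustrative sketch)."""
    rng = rng or random.Random()
    coeffs = [rng.randint(0, 1) for _ in received]   # random binary coefficients
    if not any(coeffs):
        # Avoid the useless all-zero combination.
        coeffs[rng.randrange(len(received))] = 1
    payload = 0
    for c, pkt in zip(coeffs, received):
        if c:
            payload ^= pkt   # GF(2) addition is bitwise XOR
    # The coefficient vector travels in the packet header so that the
    # receiver can decode once it collects enough independent combinations.
    return coeffs, payload
```

A receiver that gathers enough such packets with linearly independent coefficient vectors recovers the original packets by Gaussian elimination.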
Distributed computing relies on the presence of multiple devices that can interact with one another in order to perform pervasive and parallel computing. This chapter deals with the communication protocols intended for use in a distributed computing scenario; in particular, the considered computing infrastructure is composed of elements (nodes) able to serve specific application requests for the implementation of a service in a distributed manner, according to the pervasive grid computing principle. In the classical grid computing paradigm, the processing nodes are high-performance computers or multicore workstations, usually organized in clusters and interconnected through broadband wired communication networks with small delay (e.g., fiber optics, DSL lines). The pervasive grid computing paradigm overcomes these limitations by allowing the development of distributed applications that can perform parallel computations using heterogeneous devices interconnected by different types of communication technologies. In this way, we can resort to a computing environment composed of fixed or mobile devices (e.g., smartphones, PDAs, laptops) interconnected through broadband wireless or wired networks, where the devices are able to take part in a grid computing process. Suitable techniques for pervasive grid computing should be able to discover and organize heterogeneous resources, to allow scaling an application according to the available computing power, and to guarantee specific QoS profiles. In particular, the aim of this chapter is to present the most important challenges, from the communication point of view, in forming a distributed network for parallel and distributed computing. The focus is mainly on resource discovery and computation scheduling in infrastructure-less wireless networks, considering their capabilities in terms of reliability and adaptation when facing heterogeneous computing requests.
Delivery of Point-to-Multipoint (PtM) services over 4G cellular networks is gaining momentum. This thesis focuses on two different broadcast/multicast service types: fully reliable and delay-sensitive services. The first category imposes that each PtM communication is delivered in an acknowledged fashion. On the other hand, the delay-sensitive category embraces all those services aiming at broadcasting and multicasting, in an unacknowledged way, multimedia traffic flows (such as layered video services belonging to the H.264/SVC family). Concerning fully reliable services, this thesis proposes a Modified HARQ scheme characterized by minimum energy consumption and reduced delivery delay. Furthermore, in a similar system model, we propose an optimized error control strategy based on the Network Coding (NC) principle. Also in that case, the proposed strategy aims at minimizing the overall transmission energy and significantly reducing the communication delay. In addition, we propose multiple NC-based broadcast/multicast communication strategies suitable for delay-sensitive services. We prove that they can efficiently minimize either the transmission energy or the delivery delay. In particular, this thesis also refers to video service delivery over 3GPP's LTE and LTE-A networks as eMBMS flows. We address the problem of optimizing the radio resource allocation process of eMBMS video streams so that users, according to their propagation conditions, can receive services at the maximum achievable service level in a given cell. The developed resource allocation models can minimize the overall radio resource footprint. This thesis also proposes an efficient power allocation model for delay-sensitive services delivered via the NC approach over OFDMA systems. The developed allocation model can significantly reduce the overall energy footprint of the transmitting node.