Which of the following is experiencing rapid growth due to wireless technology?

Analysis and performance evaluation of the next generation wireless networks

Arash Maskooki, ... Nathalie Mitton, in Modeling and Simulation of Computer Networks and Systems, 2015

Exponential growth in demand for high bandwidth applications such as multimedia in cellular networks has triggered the need for development of new technologies capable of providing the required high-bandwidth, reliable links in wireless environments. However, the available spectrum is a limited natural resource and the current technologies have been shown to have low spectrum efficiency. To improve the spectrum efficiency, heterogeneous networks are considered as a viable solution. They are composed of a mix of macro, pico, femto and relay base-stations that improve the spectrum efficiency and throughput performance of the system. The aim of this chapter is to provide an overview of these emerging technologies. This chapter describes the main characteristics and performance requirements that the next generation networks must fulfill. Particularly, the focus is on Long Term Evolution (LTE)/LTE-Advanced technologies where some possible improvements and challenges are explained. Subsequently, the analytical methods and simulation techniques to evaluate the performance of the next generation heterogeneous networks are discussed. Finally, the simulation results for some example scenarios are provided and discussed.


URL: //www.sciencedirect.com/science/article/pii/B9780128008874000213

30th European Symposium on Computer Aided Process Engineering

Sarah Namany, ... Tareq Al-Ansari, in Computer Aided Chemical Engineering, 2020

Abstract

Exponential growth in the global population induces larger dependencies on natural resources to meet the demands for products causing resource depletion and environmental degradation. As such, there is an impetus to transform current industrial systems into systems that operate based on inherent sustainable values. While most of the current energy systems are based on polluting fossil fuels, alternative sources have exhibited high performances both economically and environmentally. Biomass for instance, represents a potential source for energy utilisation. Understanding the functioning of biomass gasification strategies as part of the biorefinery system’s operations is fundamental, yet, not sufficient to grant sustainable energy provision. The deployment of biomass technologies should also be assessed whilst considering the interaction with other energy sources. The purpose of this work is to design a dynamic and sustainable decision-making scheme that predicts the performance of the power generation system. The framework is developed as an agent-based model illustrating the several entities contributing to the shift towards a biomass-fuelled energy system. The power generation system and biomass feedstock producers are the main categories of agents interacting with one another following a set of rules restricting their behaviors with an ultimate aim to determine the optimal energy portfolio to meet energy demands whilst considering the contribution of existing natural-gas power plants and biomass blending. Rules of interactions impacting strategies adopted are imposed through two different scenarios representing the environmental and economic performances. Findings of this research demonstrate that under environmental restrictions, a diversified energy mix supporting the contribution of biomass is environmentally viable, as it contributes to a 34% reduction in emissions. 
However, it entails a 64% increase in economic expenditure owing to the deployment of biomass technologies. The optimal biomass blend indicates the dominance of manure feedstock over date pits, sludge, and food waste in both scenarios, manure having the lowest economic and environmental costs.


URL: //www.sciencedirect.com/science/article/pii/B9780128233771503098

Radio Propagation and Propagation Path-Loss Models

Vijay K. Garg, in Wireless Communications & Networking, 2007

3.1 Introduction

Exponential growth of mobile communications has increased interest in many topics in radio propagation. Much effort is now devoted to refining radio propagation path-loss models for urban, suburban, and other environments, together with substantiation by field data. Radio propagation in urban areas is quite complex because it often consists of reflected and diffracted waves produced by multipath propagation. Radio propagation in open areas free from obstacles is the simplest to treat, but, in general, propagation over the earth and water involves at least one reflected wave.

For closed areas such as indoors, tunnels, and underground passages, no established models have been developed as yet, since the environment has a complicated structure. However, when the environmental structure is random, the Rayleigh model used for urban-area propagation may be applied. When the propagation path is line-of-sight, as in tunnels and underground passages, the environment may be treated either by the Rician model or by waveguide theory. Direct-wave models may be used for propagation in a corridor.
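As a rough, generic illustration (not taken from this chapter), the Rayleigh and Rician envelope models mentioned above can be sketched by sampling the magnitude of Gaussian in-phase and quadrature components; the Rician case simply adds a deterministic line-of-sight term:

```python
import math
import random

def rayleigh_sample(sigma=1.0):
    # Envelope of two i.i.d. zero-mean Gaussian components (no line of sight)
    x = random.gauss(0.0, sigma)
    y = random.gauss(0.0, sigma)
    return math.hypot(x, y)

def rician_sample(a=1.0, sigma=1.0):
    # Same construction, plus a deterministic line-of-sight component of amplitude a
    x = random.gauss(a, sigma)
    y = random.gauss(0.0, sigma)
    return math.hypot(x, y)

random.seed(1)
# Empirical mean of the Rayleigh envelope; theory gives sigma * sqrt(pi / 2) ≈ 1.2533
rayleigh_mean = sum(rayleigh_sample() for _ in range(20000)) / 20000
```

This is only a statistical sketch of the two fading distributions; actual channel models also account for Doppler spectra and correlation, which the chapter's cited models address.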

In general, radio wave propagation consists of three main attributes: reflection, diffraction, and scattering (see Figure 3.1) [2]. Reflection occurs when a radio wave propagating in one medium impinges upon another medium with different electromagnetic properties. The amplitude and phase of the reflected wave are strongly related to the medium's intrinsic impedance, the incident angle, and the electric field polarization. Part of the radio wave's energy may be absorbed by or propagated through the reflecting medium, resulting in a reflected wave that is attenuated.

Figure 3.1. Reflection, diffraction and scattering of radio wave.

Diffraction is a phenomenon by which propagating radio waves bend or deviate in the neighborhood of obstacles. Diffraction results from the propagation of wavelets into a shadowy region caused by obstructions such as walls, buildings, mountains, and so on.

Scattering occurs when a radio signal hits a rough surface or an object having a size much smaller than or on the order of the signal wavelength. This causes the signal energy to spread out in all directions. Scattering can be viewed at the receiver as another radio wave source. Typical scattering objects are furniture, lamp posts, street signs, and foliage.

In this chapter, our focus is to characterize the radio channel and identify the parameters that distort the information-carrying (i.e., baseband) signal as it traverses the propagation medium. Several empirical models used for calculating path loss are also discussed.
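The chapter goes on to discuss several empirical path-loss models. As a minimal, generic illustration (the textbook free-space formula, not one of the empirical models the chapter develops), path loss in dB for distance in km and frequency in MHz can be computed as:

```python
import math

def free_space_path_loss_db(d_km, f_mhz):
    """Free-space path loss in dB: 20 log10(d) + 20 log10(f) + 32.44,
    with d in kilometers and f in MHz."""
    return 20 * math.log10(d_km) + 20 * math.log10(f_mhz) + 32.44

loss = free_space_path_loss_db(2.0, 900.0)  # a 2 km link at 900 MHz
```

Empirical models such as those covered later add correction terms for antenna heights and environment type on top of this free-space baseline.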


URL: //www.sciencedirect.com/science/article/pii/B9780123735805500375

Systems Analysis and Model Synthesis

Donald W. Boyd, in Systems Analysis and Modeling, 2001

1.6.1 Population Example

An exponential growth pattern results from an unconstrained population system. Bacteria, mice, and predators are typical of such systems. Mathematically, the population level at any time t is expressed by

x_t = x_0 · e^((GR − DR) t)    (1.1)

Net growth results when GR > DR. Figure 1.16 illustrates a rapid-growth population system, showing a 10-year exponential increase from an initial population level of 10 million.

Figure 1.16. Exponential Growth

As is typical of most systems, the database is incomplete; its format is presented in Table 1.3. In Equation 1.1, x_0 = 10 million. Other knowledge items are

Table 1.3. Population Database

Year    X_1         X_2         X_3         X_4
1       10          undefined   undefined   undefined
2       undefined   undefined   undefined   undefined
...     ...         ...         ...         ...
10      undefined   undefined   undefined   undefined

GR = 0.30 births per unit of the population per year

DR = 0.10 deaths per unit of the population per year

Population system levels and rates are parameterized by four time-dependent variables defined as follows:

X 1 = population level initial to Δt

X 2 = population level terminal to Δt

X 3 = annual number of births during Δt

X 4 = annual number of deaths during Δt

Δt = T = 1 year

A system block diagram is presented in Figure 1.17. Based on these parameters, simple illustrations of each of the three analytical approaches follow.

Figure 1.17. Population System
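The population example can be sketched in a few lines of Python. Equation (1.1) gives the levels directly; the X_3 and X_4 columns are computed here by applying GR and DR to the level initial to each Δt, which is one natural reading of their definitions above (an assumption, since the chapter fills them in later):

```python
import math

x0 = 10.0   # initial population level, millions
GR = 0.30   # births per unit of the population per year
DR = 0.10   # deaths per unit of the population per year

def level(t):
    """Equation (1.1): x_t = x0 * e^((GR - DR) * t)."""
    return x0 * math.exp((GR - DR) * t)

# One row of Table 1.3 per Delta-t = 1-year step.
rows = []
for year in range(1, 11):
    X1 = level(year - 1)  # population level initial to Delta-t
    X2 = level(year)      # population level terminal to Delta-t
    X3 = GR * X1          # annual number of births during Delta-t (assumed from X1)
    X4 = DR * X1          # annual number of deaths during Delta-t (assumed from X1)
    rows.append((year, X1, X2, X3, X4))
```

With GR − DR = 0.20, the level after 10 years is 10·e^2 ≈ 73.9 million, the rapid growth shown in Figure 1.16.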


URL: //www.sciencedirect.com/science/article/pii/B9780121218515500010

Software-defined wireless network (SDWN)

Muge Erel, ... Berk Canberk, in Modeling and Simulation of Computer Networks and Systems, 2015

The exponential growth of mobile data usage has brought about the necessity of scalable and manageable network implementations in current wireless technologies. With the increase in the number of mobile users and demands for better Grade of Service (GoS), current solutions are not sufficient to maintain transmission quality in heterogeneous Radio Access Technologies (RATs). Moreover, spectrum efficiency in RATs is becoming crucial due to the fixed channel allocation for small-cell base stations. In addition, the lack of scalable global control for offloading mobile data traffic between RATs makes it difficult to manage flows and control networks. Therefore, in this chapter these challenges are addressed to maintain scalable, controllable, and manageable next-generation deployments. Specifically, in order to increase the scalability of wireless systems without conceding the GoS of each user, mobile traffic can be offloaded to the best suitable small cell with the aid of a "virtual" central authority using a Software-Defined Networking (SDN) approach. Thanks to this virtualization capability of SDN, solutions can be developed without any alteration of the physical topology. In this chapter, we first revisit the basics of SDN technologies. Then, in order to solve the static channel assignment problem, we use a software-defined network model. It includes two algorithms: a topology control algorithm, which clusters several OpenFlow (OF) switches using the flows' GoS, and a flow admission control algorithm, which distributes load fairly across each virtual switch according to user demands. We also present a software-defined framework to identify the appropriate base stations to which to switch and offload mobile users with high service demands.


URL: //www.sciencedirect.com/science/article/pii/B9780128008874000262

Exploring performance and predictive analytics of agriculture data

Madhavi Vaidya, Shweta Katkar, in AI, Edge and IoT-based Smart Agriculture, 2022

Abstract

The exponential growth and ubiquity of both structured and unstructured data have led us into the big data era. Big data analytics is increasingly becoming a trending practice that many organizations are adopting to construct valuable information from big data. This field has substantially attracted academics, practitioners, and industries.

But there are some challenges for big data processing and analytics, including the integration of data, the volume of data, the rate of transformation of data, and the veracity and validity of data. The history of agriculture in India dates back to the Indus Valley civilization. Due to variations in climatic conditions, it has become challenging to achieve the desired results in crop yields. The use of technology in agriculture has increased in recent years, and data analytics is one such trend that has penetrated the agriculture field. The main challenge in using big data in agriculture is identifying the effectiveness of big data analytics. Big data can be processed and analyzed using parallel tools such as Talend or analytical paradigms like MapReduce on a Hadoop distributed file system. There are other mechanisms such as Weka and R, two of the most popular data analytics and statistical computing tools produced by the open source community, but they pose certain challenges compared to the other techniques mentioned.

In this chapter, the comparative studies of various mechanisms will be provided that will give an insight to process and analyze big data generated from farms and the grains obtained from it according to the seasons, the soil health, and the location. In addition, various case studies are shown for data processing in context with planting, agricultural growth, and smart farming. From the experimentation, the authors have shown which is the right fertilizer for a specific state and soil. In addition, the authors have worked on the analysis of crop production per state and per year.
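The per-state, per-year crop-production analysis described above follows the classic MapReduce pattern: map each record to a grouping key, then reduce by summing. A minimal single-process sketch (the records are hypothetical, not from the chapter's dataset):

```python
from collections import defaultdict

# Hypothetical sample records: (state, year, crop, production in tonnes).
records = [
    ("Punjab", 2020, "wheat", 1800),
    ("Punjab", 2021, "wheat", 1750),
    ("Kerala", 2020, "rice", 600),
    ("Punjab", 2020, "rice", 400),
]

# Map phase: emit a (state, year) key per record.
# Reduce phase: sum production per key -- the same shuffle-and-sum
# that a Hadoop MapReduce job would distribute across nodes.
totals = defaultdict(int)
for state, year, crop, tonnes in records:
    totals[(state, year)] += tonnes
```

On a Hadoop cluster, the mapper and reducer would run as separate distributed stages; the aggregation logic itself is unchanged.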


URL: //www.sciencedirect.com/science/article/pii/B978012823694900030X

Hardware Accelerator Systems for Artificial Intelligence and Machine Learning

Amitabh Biswal, ... Zakir Hussain, in Advances in Computers, 2021

1.1 Motivation

With the exponential growth of information online, the popularity of and need for recommender systems have increased substantially. As music streaming platforms such as Spotify, Amazon Music, and Gaana become popular, more and more people are going online to listen to music and have access to a vast music library. With high-quality, portable, up-to-date music available on the go, it becomes a problem for users to find music that matches their taste within such a huge library. With the help of a music recommender system, users can receive recommendations based on an analysis of their listening history, so they can discover new music without the hassle of spending time searching and listening. A music recommender system can also help companies make more profit: as more people listen to their music, their popularity increases, and more people become hooked on their streaming platform.

Unfortunately, most recommender systems designed to date make use of the explicit feedback that users provide, such as ratings, and most users do not rate the music they listen to. Providing a rating for each and every piece of music hinders the natural flow of listening, as users have to put in extra effort. Implicit feedback is therefore better, as the user does not have to care about ratings. Implicit feedback is data collected from the user's interaction with the music streaming service, which is available in large quantities, e.g., the number of times a user has listened to a song. The form of implicit feedback differs depending on the domain. The more a user interacts with the system, the more data is available for recommendation. Unlike explicit feedback, where the user gives an explicit rating to a song, with implicit feedback we compare preferences relatively: e.g., if a user has listened to music A 10 times and to music B 5 times, we deduce that the user prefers A over B.
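The relative-preference idea above can be shown in a toy snippet (hypothetical play counts, not part of the chapter's system): items are simply ranked by how often the user played them, with higher counts read as stronger implicit preference.

```python
# Hypothetical play counts gathered from a streaming service's usage logs.
play_counts = {"song_A": 10, "song_B": 5, "song_C": 12}

def preference_order(counts):
    """Rank items by implicit-feedback strength (play count), descending."""
    return sorted(counts, key=counts.get, reverse=True)

ranking = preference_order(play_counts)
```

A real implicit-feedback recommender would turn these counts into confidence weights for model training rather than rank them directly, but the underlying signal is the same.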

With a large amount of user data available, it becomes easier to make personal recommendations to each user. With the increase in computational power, deep learning is able to provide great accuracy when trained on large amounts of data. So we will create a music recommender system by combining a deep learning method called the restricted Boltzmann machine (RBM) with implicit feedback.


URL: //www.sciencedirect.com/science/article/pii/S0065245821000139

Beyond Sentiment

F. Pallavicini, ... F. Mantovani, in Sentiment Analysis in Social Networks, 2017

1 Introduction

The exponential growth in the use of digital devices, together with ubiquitous online access, provides unprecedented ground for the constant connectivity of people and offers tremendous capabilities for publicly expressing opinions, attitudes, or reactions regarding many aspects of everyday human activities [1]. Social media, such as blogs, forums, and social network platforms (eg, Facebook, LinkedIn, Twitter, Instagram, YouTube), are quickly becoming an integral part of people's lives: the virtual spaces where, daily, individuals share opinions and information and maintain and/or expand their relational networks. The massive use of online social networks and the abundance of data collected through them have exponentially raised the attention of the scientific and business communities toward them [2–4]. Nowadays, the constant refinement of analytical tools is offering a richer array of opportunities to analyze these data for many different purposes [5]. Differences in the features and characteristics of online social networks are reflected in the huge number of different statistics and metrics that it is possible to track and analyze. The most widely adopted metrics are numeric, relatively easy to obtain, and freely available, such as engagement and influence metrics [6]. However, metrics of this type are often described as "vanity metrics," since they do not interpret or contextualize the data collected.1,2 For this reason, other methods of analysis have been introduced. Among them, one of the most used is sentiment analysis (SA) [7], the analysis of the feelings (ie, opinions, emotions, and attitudes) behind the words, using natural language processing tools. SA is considered a quality metric, which looks behind the numbers to understand how information about emotions and attitudes is conveyed in language [7].
Given the rising interest in the application of SA to data from online social networks, research in this area has acknowledged the limitations that come from handling the complex characteristics of natural language (and related inferences) without considering the data collected through social networks as "networked data." Most work in SA [8, 9] is based merely on the textual information expressed in online posts and comments. Early approaches to overcome this important limitation are emerging in the recent literature, trying, for example, to leverage information on friendship relationships between individuals, since connected users may be likelier to hold similar opinions3 [10, 11]. However, these features only approximate the rich relational structure encoded in an online social network. Among the complementary analytical methods that are starting to be introduced in the analysis of data collected through online social networks, one of the most interesting is social network analysis (SNA), which, through a quantitative-relational approach, makes it possible to consider relational data (ie, the existing connections and links between users on social networks). Within this context, this chapter will first define online social networks and briefly describe their history, highlighting the differences and specific features that characterize them. Then the psychological and sociological processes underlying online social network interactions will be discussed within the framework of relevant theoretical constructs and methods of analysis (with a special focus on SNA). Finally, the chapter will point out how this understanding can be effectively integrated into SA methodological approaches to strengthen their reliability and validity.
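At its simplest, the text-only SA described above can be reduced to a lexicon lookup. The toy lexicon below is invented for illustration (real SA systems use far richer resources and handle negation, context, and syntax):

```python
# Toy polarity lexicon; each word maps to +1 (positive) or -1 (negative).
LEXICON = {"love": 1, "great": 1, "happy": 1, "hate": -1, "awful": -1, "sad": -1}

def sentiment_score(text):
    """Sum word polarities -- the crudest lexicon-based SA baseline.
    Words absent from the lexicon contribute 0."""
    return sum(LEXICON.get(word, 0) for word in text.lower().split())

score = sentiment_score("I love this great platform")
```

The chapter's point is precisely that such purely textual scoring ignores the relational structure that SNA captures.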


URL: //www.sciencedirect.com/science/article/pii/B9780128044124000024

Security Issues of the Web of Things

Saad El Jaouhari, ... Jean-Marie Bonnin, in Managing the Web of Things, 2017

14.2.3.2 Security Requirements for the IoT

The exponential growth of the number of deployed devices and of the volume of data exchanged over the network raises several challenges for achieving a global architecture. These challenges concern not only the operational part but also, and most importantly, security and privacy in such an environment [28]. What makes the IoT unique are the properties that must be considered in order to define its security and privacy challenges. As explained in [29], these properties result from the analysis of much related research in the IoT field, and they are mainly four: 1) an uncontrolled environment, which is natural for an environment such as the Web, especially when dealing with mobile Things (moving from one domain to another) that are physically accessible and require the establishment of trust relationships in order to exchange sensitive information; 2) heterogeneity, since the IoT environment may integrate various types of entities coming from different origins; 3) scalability, related to the plethora of Things that need to be interconnected, hence highly scalable protocols need to be applied; and 4) constrained resources in terms of energy, computation capabilities, and storage space. The same analysis shows that the security requirements can be grouped into five main sections: Network Security, Identity Management, Privacy, Trust, and Resilience.

Network Security:

An important requirement for network security is preventing eavesdropping, tampering, spoofing, denial of service, and so on, when sensitive information is sent over the Internet, either from one Thing to another or from a Thing to a human. Confidentiality requires establishing secure communication for the IoT's smart objects, especially when they communicate through the Internet. Traditionally, technologies such as IPsec and TLS have been proven to fulfill this requirement; however, they require significant cryptographic computations that exceed the capacities of current IoT devices. Thus, a dedicated secure network stack for the IoT needs to provide strong yet lightweight encryption, so that constrained devices can benefit from the same security functionality that is typical of unconstrained domains. Most solutions tend to use a trusted unconstrained node to offload computationally intensive tasks such as the calculation of the master session key. Another property guaranteed by encryption is the integrity of the data, ensuring it is not altered on its way to the destination. Authenticity provides proof that a connection is established with an authenticated entity, and it can also include integrity. Finally, Things always need to be available, meaning that the connectivity of a Thing should persist even under link failure; this is the availability property. Secure routing is another issue that can arise in the network, and it can be addressed through the implementation of a strong routing protocol such as RPL [30].

Identity Management:

First of all, each object needs to be aware of its own resources, such as its identity, constraints, and security requirements. Due to the enormous number of devices deployed on the Internet, and the complex relationships they can have with each other, appropriate identity management mechanisms need to be in place. Still, identity management alone is not sufficient, which shows the importance of authentication, authorization, accountability, and revocation mechanisms. Authentication is very important to the IoT and is likely to be the first operation carried out by a node when it joins a new network, for example at first deployment or in mobility cases. Usually, authentication is performed using an authentication server with a network access protocol such as PANA or the Extensible Authentication Protocol (EAP) [16]. As for managing access authorization and the ownership of resources, federated authorization schemes such as Kerberos and OAuth can provide delegation of access across domains as well as quick revocation. An authorization framework is presented in Section 14.2.3.3. Accountability mechanisms need to deal with the massive amount of data that will be exchanged. Mechanisms to manage the identity of nodes, and key management with protocols such as Authenticated Key Exchange (AKE), can also be compatible with the IoT [16].

Privacy:

Objects dealing directly with the private information of individuals and organizations raise a challenging privacy issue in the IoT. The environment needs to provide privacy for data transmitted over the Internet, in the sense that sniffed traffic containing those data will not reveal or expose their content. For this reason, mechanisms for data anonymity, pseudonymity, and unlinkability need to be used to guarantee the privacy of the data on the one hand and of the entity itself (human or device) on the other.

Trust:

Giving a proper definition of trust, especially in a distributed architecture such as the IoT, is still a challenge, since any trusted entity can become malicious either intentionally or after being compromised. Trust in this case can be separated into three parts. The first is device trust: prior trust cannot always be established, due to the mobility and distribution properties of the IoT; however, approaches such as trusted computing [31] and computational trust [32] can address this problem. The second is entity trust, referring to the expected behavior of the different entities; this part presents more challenges, and solutions based on behavior analysis and the application of proper policies need to be investigated. The last is data trust, which can use previously established trust relationships to judge the trustworthiness of data, e.g., data originating from a trusted entity might also be trusted.

Resilience:

Finally, the resilience requirement: IoT applications need to ensure the availability of resources in case of system failure, and to be robust against different attacks.


URL: //www.sciencedirect.com/science/article/pii/B9780128097649000184

Big Data Analytics

Vito Giovanni Castellana, ... Oreste Villa, in Handbook of Statistics, 2015

4 Related Work

The exponential growth in the availability of data, especially of datasets that have significant amounts of relationships among their elements (such as all kinds of networks, from social, to security, to biomedical, or the Internet at large itself), has led to the development of a large number of database systems (both commercial and at the research level) able to manage RDF datasets.

Some databases do not support named graphs. Most (if not all) RDF databases support—to varying degrees—the standard SPARQL query language. Open source RDF databases include: Jena SDB (Apache Jena, 2014) backed by relational databases; Jena TDB (Apache Jena, 2014) backed by native, disk-based storage; Sesame (SesameRDF, 2014) with support for layering on top of relational databases or a native backend; Virtuoso Open Source edition (Virtuoso, 2014); and 4store (Harris et al., 2009). Research-level RDF database approaches include: RDF-3X (Neumann and Weikum, 2010), Hexastore (Weiss et al., 2008), YARS2 (Harth et al., 2007), SHARD (Rohloff and Schantz, 2010), BitMat (Atre et al., 2010), and SPARQL queries on PIG (Kotoulas et al., 2012). Commercial RDF databases include: Bigdata (BigDataDB, 2014); BigOWLIM (Ontotext, 2014) with a single-node edition (OWLIM-SE) and a replication cluster edition (OWLIM-Enterprise); Virtuoso (Virtuoso, 2014), which has a cluster edition; 5store (successor to 4store) implemented for clusters; Urika (YarcData, 2014), a data analytics appliance featuring a graph database, built on the Cray XMT.

Solutions that try to implement RDF databases on top of conventional relational SQL-based systems usually incur overheads. Obtaining a feature-complete SPARQL-to-SQL translation is difficult and may introduce performance penalties. Translating SPARQL to SQL implies the use of relational algebra to perform optimizations, and the use of classical relational operators (e.g., conventional joins and selects) to execute the query. By translating SPARQL to graph pattern-matching operations instead, GEMS reduces the overhead for intermediate data structures and can exploit optimizations that look at the execution plan (i.e., the order of execution) from a graph perspective. Many of these solutions rely on front-ends running on Java that generate more or less standardized representations and API calls. GEMS, instead, provides a full, custom software stack. Queries are translated to efficient C++ code with calls and accesses to fully customized and optimized APIs and data structures. With a typical high-performance-computing approach, the code effectively executed by GEMS is compiled, efficient C++ code.
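To make the idea of executing a query as graph pattern matching concrete, here is a toy in-memory triple store with basic graph pattern matching by exploration. This is illustrative only: the data, function names, and API are invented, not GEMS's actual SGLib interface.

```python
# A toy triple store; '?'-prefixed terms in patterns are variables.
TRIPLES = [
    ("alice", "knows", "bob"),
    ("bob", "knows", "carol"),
    ("alice", "likes", "carol"),
]

def match(pattern, binding):
    """Yield extended bindings for one (s, p, o) pattern against TRIPLES.
    Already-bound variables are substituted before matching. (No check for
    a variable repeated within a single pattern -- kept simple on purpose.)"""
    s, p, o = (binding.get(t, t) for t in pattern)
    for triple in TRIPLES:
        new = dict(binding)
        for term, value in zip((s, p, o), triple):
            if term.startswith("?"):
                new[term] = value      # bind the variable to this triple's term
            elif term != value:
                break                  # constant mismatch: try the next triple
        else:
            yield new

def basic_graph_pattern(patterns):
    """Join the patterns left to right, threading bindings through --
    the graph-exploration analogue of a chain of relational joins."""
    bindings = [{}]
    for pattern in patterns:
        bindings = [b2 for b in bindings for b2 in match(pattern, b)]
    return bindings

# "Whom does alice know, and whom do they know?"
result = basic_graph_pattern([("alice", "knows", "?x"), ("?x", "knows", "?y")])
```

A real engine would add index structures, join reordering, and distribution; the point here is only that the query never leaves the graph representation.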

Some solutions process queries in-memory while others, instead, store data on disks and perform swapping. Jena (Apache Jena, 2014) and Sesame (SesameRDF, 2014), for example, provide libraries that natively implement in-memory RDF storage and support integration with some disk-based, SQL backends. OpenLink Virtuoso (Virtuoso, 2014) implements an RDF/SPARQL layer on top of their SQL-based column store for which multi-node, cluster support is available. GEMS adopts in-memory processing: it stores all data structures in RAM. In-memory processing potentially allows increasing the dataset size while maintaining constant query throughput by adding more cluster nodes. In-memory processing is, in general, the current approach of choice for processing large graphs at the highest speed possible, given the high irregularity of the data structures and the algorithms. Similarly to our system, Trinity.RDF (Zeng et al., 2013), built on top of the Trinity graph processing platform (Shao et al., 2013), operates in memory, processing queries through graph exploration procedures. However, it mainly focuses on matching Basic Graph Patterns; the primitives provided by SGLib instead, allow supporting various forms of SPARQL queries, including solution modifiers or featuring complex graph patterns.

Some of the mentioned approaches leverage MapReduce frameworks to achieve scalability on clusters. For example, SHARD (Rohloff and Schantz, 2010) is a native triplestore built on top of Hadoop, while YARS2 (Harth et al., 2007) is a bulk-synchronous, distributed query-answering system. Both exploit hash partitioning to distribute triples across nodes. Mapping SPARQL queries to PIG (Kotoulas et al., 2012) abstracts operations over the PIG language, which is compiled into MapReduce jobs. These approaches work well for simple index lookups, but they also present high communication overheads for moving data through the network with more complex queries, and they introduce load-balancing issues in the presence of data skew. GEMS, instead, abstracts operations at a lower level, using data structures and primitives from SGLib, which implement graph primitives and table operations.

4store (and its successor 5store) (Harris et al., 2009) directly interfaces to low-level operations. 4store differentiates processing and storage nodes (although processing backends can execute on the same node as storage backends, they still incur TCP/IP communication overheads). Segments of the datasets are non-overlapping and uniformly distributed; however, segments with significant data skew can be replicated. There is no direct communication among storage nodes. Processing nodes always send a single request at a time to storage nodes; however, since a storage node may host multiple segments, and there is a different connection per segment, a request can be sent to all the segments and then all the replies aggregated. 4store has been demonstrated to scale up to 32 nodes (and the authors used it up to 9 nodes). The commercial successor, 5store, is projected to reach thousands of nodes, although it appears that storage scaling, rather than query throughput, is the objective of that approach. GEMS, instead, adopts a distinct, custom runtime to implement hardware-specific optimizations that enable scaling the size of the dataset while also scaling performance. The runtime focuses on optimizing critical aspects of the irregular behavior of graph methods applied to large graphs, while also providing the features required for query processing.

Urika is a commercial shared-memory system from YarcData (2014) targeted at big data analytics. Urika exploits a Cray XMT 2 machine, which implements custom nodes with purpose-built multithreaded processors (barrel processors with up to 128 threads and a very simple cache). Besides multithreading, which allows tolerating the latencies of accessing data on remote nodes, the system has hardware support for a scrambled global address space and fine-grained synchronization. These features allow more efficient execution of irregular algorithms, such as graph methods. On top of this hardware, YarcData interfaces with the Jena framework to provide a front-end API. GEMS, instead, exploits clusters built with commodity components. These are cheaper to acquire and maintain, and they evolve more rapidly than custom hardware. On the other hand, the software layers of GEMS have been completely custom built to support query processing. GEMS has a custom compiler and dedicated APIs and data structures, aiming to provide the most optimized path possible from query entry to result return.


URL: //www.sciencedirect.com/science/article/pii/B9780444634924000149

Which of the following wireless technologies is most commonly used for inventory tracking?

Bluetooth/BLE for asset tracking: Bluetooth is one of the most common technologies used for indoor positioning applications such as asset and inventory tracking.

Which of the following is a low power shorter range wireless technology that is intended to replace wired connectivity for accessories such as speakers or a mouse?

Bluetooth is the low-power, short-range wireless technology intended to replace wired connectivity for accessories such as speakers, headsets, keyboards, and mice. (Zigbee, by contrast, is a short-range, low-power digital radio technology aimed at sensor and control networks; relative to Wi-Fi and Bluetooth it uses very little power and a low data transfer rate, but it is not designed to replace accessory cables.)

What are the four wireless communication network categories?

There are four types of wireless networks -- wireless local area networks, wireless metropolitan area networks, wireless personal area networks and wireless wide area networks -- each with its own function.

What is a wireless technology that transmits signals over short distances among cell phones computers and other devices?

Bluetooth was developed as a PAN for connecting computers, cell phones, and other devices up to a range of about 30 feet. It is a low-speed data transmission method. Bluetooth operates in the same 2.4- to 2.483-GHz unlicensed spectrum as Wi-Fi.
