University of Thessaly Institutional Repository
  • Scientific Publications of UTH Members (ΕΔΠΘ)
  • Publications in journals, conferences, book chapters, etc.
  • View Item
Probabilistic data allocation in pervasive computing applications

Author
Kolomvatsos K.
Date
2020
Language
en
DOI
10.1109/ISPA-BDCloud-SocialCom-SustainCom51426.2020.00152
Keyword
Big data
Cloud computing
Social networking (online)
Statistics
Ubiquitous computing
Aggregation schemes
Fault tolerant systems
Internet of Things (IoT)
Pervasive computing applications
Probabilistic approaches
Probabilistic data
Research communities
Services and applications
Internet of Things
Publisher
Institute of Electrical and Electronics Engineers Inc.
Abstract
Pervasive Computing (PC) deals with placing services and applications around end users to facilitate their everyday activities. Current advances in the Internet of Things (IoT) and Edge Computing (EC) make it possible to adopt their infrastructures for hosting the services that support PC applications. The numerous devices present in IoT and EC infrastructures offer the opportunity to record and process data through interaction with users and their environment. The appropriate processing should then be applied to these data, as requested by end users or applications. It is efficient to process such requests as close as possible to end users in order to limit the latency of responses. The research community, identifying this need, proposes the EC as the appropriate place to perform the discussed processing, which takes the form of tasks or queries. Tasks/queries set specific conditions on the data they require, imposing a number of requirements on the dataset over which the desired processing should be executed. It is therefore wise to pre-process the data and compute their statistics, so that it is known in advance whether including a given dataset in the requested processing is profitable. This paper focuses on a model responsible for efficiently distributing the collected data to the appropriate datasets. We store similar data in the same datasets and keep their statistics solid (i.e., we maintain a low deviation) through the use of a probabilistic approach. The second part of the proposed approach is an aggregation scheme over multiple outlier detection methods. We decide to transfer outliers to the Cloud rather than store them locally, as they would jeopardize the solidity of the datasets. If data are to be stored locally, we provide a mechanism for selecting the most appropriate dataset to host them, while performing controlled replication to support a fault-tolerant system.
The performance of the proposed models is evaluated through a large number of experiments covering different scenarios. © 2020 IEEE.
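The allocation scheme the abstract describes can be sketched in code. This is a minimal illustration under stated assumptions, not the paper's actual model: the z-score detector (a stand-in for the multiple outlier detection methods being aggregated), the majority-vote threshold, and the softmax-style probabilistic dataset selection are all assumptions made here for demonstration. Incoming data that a majority of checks flag as outliers are shipped to the Cloud; the rest are assigned probabilistically, favoring the dataset whose statistics they deviate from least.

```python
import math
import random

def zscore_outlier(x, mean, std, t=3.0):
    # Flag x as an outlier w.r.t. a dataset's statistics (simple z-score test).
    return std > 0 and abs(x - mean) / std > t

class Allocator:
    def __init__(self, datasets):
        # datasets: list of non-empty lists of floats (locally stored datasets)
        self.datasets = datasets
        self.cloud = []  # outliers are transferred here instead of stored locally

    def _stats(self, data):
        m = sum(data) / len(data)
        var = sum((v - m) ** 2 for v in data) / len(data)
        return m, math.sqrt(var)

    def allocate(self, x):
        stats = [self._stats(d) for d in self.datasets]
        # Aggregate outlier checks by majority vote across datasets.
        votes = sum(zscore_outlier(x, m, s) for m, s in stats)
        if votes > len(stats) / 2:
            self.cloud.append(x)   # would jeopardize dataset solidity locally
            return None
        # Probabilistic selection: weight each dataset by how little x deviates
        # from its statistics (softmax over negative normalized deviation).
        devs = [abs(x - m) / (s or 1.0) for m, s in stats]
        weights = [math.exp(-d) for d in devs]
        total = sum(weights)
        probs = [w / total for w in weights]
        i = random.choices(range(len(self.datasets)), probs)[0]
        self.datasets[i].append(x)  # store x; statistics stay near-solid
        return i
```

A value close to one dataset's mean is almost always routed there, keeping similar data together, while randomization avoids deterministically overloading a single dataset when several are comparable. Controlled replication for fault tolerance is omitted from this sketch.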
URI
http://hdl.handle.net/11615/75004
Collections
  • Publications in journals, conferences, book chapters, etc. [19735]