Year: 2018
Author: Sampé, Josep
Research Area: Distributed storage
In a world increasingly dependent on technology, digital data is generated at an unprecedented rate. Companies that require large amounts of storage, such as Netflix or Dropbox, therefore turn to cloud storage solutions, where data is remotely maintained, managed, and backed up easily and cheaply. In particular, cloud object stores are widely adopted and increasingly used to hold these huge volumes of data, mainly thanks to built-in characteristics such as simplicity, scalability, and high availability. Moreover, the evolution of cloud computing, for example in data analytics, makes cloud object stores an important actor in today's cloud ecosystem.
Year: 2017
Author: Chaabouni, Rahma
Research Area: Peer-to-peer computing
With ever-increasing Internet traffic, peer-to-peer (P2P) content distribution has emerged as an alternative to the traditional client-server model, especially with the recent surge of bandwidth at the edges of the Internet. Data centers with a limited bandwidth budget can exploit the upload capacity of clients interested in the same content to improve the overall Quality of Service (QoS). This can be done by introducing a P2P protocol, BitTorrent for instance, when the load on a given piece of content becomes high.
Year: 2015
Author: Gracia-Tinedo, Raúl
Research Area: Distributed storage
Nowadays, end-users require ever larger amounts of reliable and available online space to store their personal information (e.g., documents, pictures). This motivates researchers to devise and evaluate novel personal storage systems that can cope with the growing storage demands of users. In this dissertation, we focus on two emerging personal storage architectures: Personal Clouds and social storage systems. As one can easily infer, these architectures are radically different and pursue distinct goals.
Year: 2011
Author: Pàmies-Juárez, Lluís
Research Area: Distributed storage
Over the last decade, users’ storage demands have been growing exponentially year over year. Besides demanding more storage capacity and more data reliability, today users also demand the possibility to access their data from any location and from any device. These new needs encourage users to move their personal data (e.g., E-mails, documents, pictures, etc.) to online storage services such as Gmail, Facebook, Flickr or Dropbox. Unfortunately, these online storage services are built upon expensive large datacenters that only a few big enterprises can afford.
Year: 2010
Author: Mondéjar-Andreu, Rubén
Research Area: Collaborative applications and middleware
In this PhD dissertation we present a distributed middleware proposal for large-scale application development. Our main aim is to separate the distributed concerns of these applications, such as replication, so that they can be integrated independently and transparently. Our approach implements these concerns using the paradigm of distributed aspects. In addition, our proposal builds on peer-to-peer (P2P) network and aspect-oriented programming (AOP) substrates to provide these concerns in a decentralized, decoupled, efficient, and transparent way.
Year: 2010
Author: Pujol-Ahulló, Jordi
Research Area: Peer-to-peer computing
This thesis defines a generic framework for building high-level services, for both data search and content distribution, on structured peer-to-peer networks (SPNs). We consider a twofold genericity: (i) an extensible framework for services and applications, with dynamic deployment over other P2P systems; and (ii) a generic, portable framework that runs over most SPNs.
Year: 2009
Author: Sánchez-Artigas, Marc
Research Area: Peer-to-peer computing
A peer-to-peer (P2P) overlay network is a logical network, built on top of an underlying physical network, that facilitates the location of distributed resources without centralized control. These systems have emerged at the edges of the Internet thanks to the widespread growth of broadband Internet connections.
Year: 2007
Author: Pairot-Gavaldà, Carles
Research Area: Collaborative applications and middleware
Distributed systems have evolved considerably in recent years. Depending on the scalability level required, several solutions exist for developing distributed applications. If the number of users is relatively low, centralized client-server models, with a rather simple underlying architecture, provide acceptable performance.
Year: 2020
Author:
Research Area: Cloud Computing
Within the last decade, cloud computing has become the paradigm adopted by many companies to deploy and administer their applications and infrastructure. With features such as high availability, elasticity, and on-demand pricing, this technology has revolutionized the way software is designed and deployed. However, cloud computing is still not broadly adopted in the field of High Performance Computing because, like every new technology or paradigm, it requires overcoming a learning curve before users can master it. In this scenario, many scientists and engineers must face this challenge on top of the problems they are actually trying to solve. To democratize access to cloud computing and facilitate its use by inexperienced users, new frameworks and tools are being developed. As a contribution to this democratization process, this study presents an in-depth analysis of Function-as-a-Service offerings and demonstrates them as an alternative for moving high performance computing solutions to cloud computing.
Year: 2012
Author: Moreno-Martínez, Adrián
Research Area: Distributed storage
Personal storage is a mainstream service used by millions of users. Among the existing alternatives, Friend-to-Friend (F2F) systems are nowadays an interesting research topic aimed at providing a secure and private off-site storage service.
However, the specific characteristics of F2F storage systems (reduced node degree, correlated availabilities) represent a serious obstacle to their performance. In fact, it is extremely difficult for an F2F system to guarantee an acceptable storage service quality, in terms of transfer times and data availability, to end users. In this landscape, we propose resorting to the Cloud to improve the storage service of an F2F system.
Year: 2011
Author: Gracia-Tinedo, Raúl
Research Area: Security and trust
Distributed Hash Tables (DHTs) have been used as a common building block in many distributed applications, including Peer-to-Peer (P2P), Cloud and Grid Computing. However, there are still important security vulnerabilities that hinder their adoption in today’s large-scale computing platforms. For instance, routing vulnerabilities have been a subject of intensive research but existing solutions are mainly based on redundancy.
Year: 2008
Author: Arrufat-Arias, Marcel
Research Area: Mobile Ad-hoc Networks
Year: 2008
Author: París, Gerard
Research Area: Mobile Ad-hoc Networks
Mobile ad hoc networks (MANETs) are wireless networks that do not rely on any fixed infrastructure. The nodes in a MANET may not have all other nodes within radio range, so each node must act both as an end node and as a router. Interest in MANETs has grown with the increasing popularity of wireless-enabled devices (laptops, PDAs, mobile phones).
Year: 2020
Author: Eizaguirre, Germán T.
Research Area: Cloud Computing
The serverless paradigm brings cloud computing closer to non-specialist programmers, with principles such as simplicity, scalability, and pay-per-use billing. Serverless architectures open the way to processing otherwise unmanageable data volumes from a standard personal computer, removing burdensome resource-provisioning tasks. In this context, several frameworks for serverless data analytics have emerged in recent years, such as PyWren and Lithops. However, the stateless nature of serverless functions makes it difficult for them to host workloads with heavy communication between functions. Shuffle-like jobs, such as the MapReduce sort, are especially problematic under these conditions. Current solutions for serverless shuffle jobs are not fully transparent to the user and do not fulfill the principles of the serverless paradigm. In this work we introduce a completely transparent serverless sort utility built on top of Lithops that frees the user from any resource management. We include concepts uncommon in serverless systems, such as speculative and asynchronous MapReduce execution. We present a mathematical model to infer the optimal number of workers for each sort job and validate its effectiveness. Finally, we test the performance of our algorithm against a standardized sort benchmark.
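The shuffle at the heart of a MapReduce-style sort can be illustrated with a minimal local sketch: mappers range-partition their chunk by sampled key boundaries, and each reducer sorts one key range independently, so concatenating the reducer outputs yields a globally sorted result. This is a plain-Python simulation of the idea, not the thesis's actual Lithops implementation or its worker-count model; all function names here are illustrative.

```python
import random

def map_partition(chunk, boundaries):
    """Map phase: assign each key to the bucket whose key range contains it."""
    buckets = [[] for _ in range(len(boundaries) + 1)]
    for key in chunk:
        # bucket index = number of boundaries that are <= key
        idx = sum(1 for b in boundaries if key >= b)
        buckets[idx].append(key)
    return buckets

def reduce_partition(bucket):
    """Reduce phase: each worker sorts its own key range independently."""
    return sorted(bucket)

def simulated_serverless_sort(data, n_workers):
    # Sample the input to choose balanced range boundaries (as in TeraSort).
    sample = sorted(random.sample(data, min(len(data), 100)))
    step = max(1, len(sample) // n_workers)
    boundaries = sample[step::step][:n_workers - 1]
    # "Invoke" one mapper per chunk of the input.
    chunk_size = (len(data) + n_workers - 1) // n_workers
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    mapped = [map_partition(c, boundaries) for c in chunks]
    # Shuffle: gather bucket i from every mapper, then reduce each range in order.
    result = []
    for i in range(len(boundaries) + 1):
        bucket = [k for m in mapped for k in m[i]]
        result.extend(reduce_partition(bucket))
    return result

data = [random.randrange(10_000) for _ in range(1_000)]
assert simulated_serverless_sort(data, n_workers=8) == sorted(data)
```

In a real serverless deployment each mapper and reducer would run as a separate function invocation, which is where the communication cost of the shuffle, and hence the choice of worker count, becomes critical.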
Year: 2020
Author: Arjona, Aitor
Research Area: Cloud Computing
This final year project presents Triggerflow, a new extensible, serverless-by-design platform for orchestrating serverless workflows in the cloud. Triggerflow follows an Event-Condition-Action model with stateful, dynamic triggers that can filter, aggregate, process, and route events from different sources. Thanks to the extensibility offered by the triggers' fully programmable conditions and actions, backed by a consistent and persistent state, we can orchestrate different serverless workflow abstractions such as Directed Acyclic Graphs, State Machines, and Code Workflows. Triggerflow has been implemented on open-source systems of the Cloud Native Computing Foundation, such as CloudEvents, Kubernetes, and KEDA (Kubernetes Event-driven Autoscaling). Triggerflow has proven able to process large numbers of events per second, aggregating and synchronizing massively parallel tasks on serverless functions efficiently and with fault tolerance. A real scientific workflow has been implemented on Triggerflow in order to study the feasibility of event-based orchestration for this type of workload. This work was developed as part of the article "Triggerflow: Event-based Orchestration for Serverless Workflows", presented at and accepted to the ACM Distributed and Event-Based Systems conference in 2020.
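The Event-Condition-Action model with stateful triggers can be sketched in a few lines: each trigger pairs an event type with a programmable condition and action, and keeps its own state across events, which is what enables aggregation patterns such as fan-in synchronization of parallel tasks. This is a self-contained illustration of the concept only; the class and function names are hypothetical and do not reflect Triggerflow's actual API.

```python
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class Trigger:
    """One Event-Condition-Action rule with its own persistent state."""
    event_type: str
    condition: Callable[[dict, dict], bool]  # (event, state) -> should the action fire?
    action: Callable[[dict, dict], Any]      # (event, state) -> result of firing
    state: dict = field(default_factory=dict)

class TriggerProcessor:
    """Routes incoming events to matching triggers, ECA-style."""
    def __init__(self):
        self.triggers = []
        self.fired = []

    def add_trigger(self, trigger):
        self.triggers.append(trigger)

    def process(self, event):
        for t in self.triggers:
            if event["type"] == t.event_type and t.condition(event, t.state):
                self.fired.append(t.action(event, t.state))

# Fan-in synchronization: fire only once all 3 parallel tasks report completion.
def all_tasks_done(event, state):
    state.setdefault("done", set()).add(event["task_id"])
    return len(state["done"]) == 3

proc = TriggerProcessor()
proc.add_trigger(Trigger("task.done", all_tasks_done,
                         lambda ev, st: "stage-complete"))
for tid in range(3):
    proc.process({"type": "task.done", "task_id": tid})
assert proc.fired == ["stage-complete"]
```

In a deployed system the trigger state would live in a persistent, consistent store and the processor would be driven by an event bus, which is what lets the same mechanism encode DAGs, state machines, and other workflow abstractions.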
Year: 2020
Author: Roca-Llaberia, Pol
Research Area: Cloud Computing
Serverful architectures, those built from resources administered by the user, have for many years been an effective way to accumulate resources for large-scale computing. Despite some disadvantages, such as their low elasticity, until now most efforts have been dedicated to building libraries and frameworks designed to run on clusters of computers or virtual machines, the serverful architecture par excellence. Nonetheless, with new advances in cloud computing coming from totally different approaches, the assumption that serverful systems provide the best solution to large-scale workflows has become, at least, arguable. In fact, the new wave of technologies could finally enable transparency, a rather promising property that lets developers design and program applications independently of the characteristics of the resources that will execute them. Transparency, along with the elasticity offered by some of these new technologies, could provide a better solution for machine learning, nowadays the most popular branch of artificial intelligence due to its synergy with large datasets, since many of its algorithms are complex by design and require more flexibility in their implementation.