S. No. | IEEE TITLE | ABSTRACT | IEEE YEAR |
1 | CDA Generation and Integration for Health Information Exchange Based on Cloud Computing System | Successful deployment of Electronic Health Records helps improve patient safety and quality of care, but it has the prerequisite of interoperability between the Health Information Systems (HIS) of different hospitals. The Clinical Document Architecture (CDA) developed by HL7 is a core document standard for ensuring such interoperability, and propagation of this document format is critical. Unfortunately, except in a handful of countries, hospitals are reluctant to adopt an interoperable HIS because of its deployment cost. A problem arises even when more hospitals adopt the CDA format, because clinical data scattered across separate documents are hard to manage. In this paper, we describe our cloud-based CDA document generation and integration Open API service, through which hospitals can conveniently generate CDA documents without purchasing proprietary software. Our CDA document integration system merges the multiple CDA documents of a patient into a single CDA document, so that physicians and patients can browse the clinical data in chronological order. Because our generation and integration system is based on cloud computing and offered as an Open API, developers on different platforms can use it to enhance interoperability. | 2016 |
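The generate-then-integrate workflow of this paper can be sketched as follows. This is a minimal illustration only: the element names and attributes are hypothetical stand-ins, not the real HL7 CDA schema, and the functions do not represent the authors' actual Open API.

```python
import xml.etree.ElementTree as ET

def generate_cda(patient_id, observations):
    """Build a minimal CDA-style XML document (hypothetical structure,
    not the full HL7 CDA schema) from one hospital's clinical entries."""
    root = ET.Element("ClinicalDocument")
    ET.SubElement(root, "recordTarget", id=patient_id)
    body = ET.SubElement(root, "structuredBody")
    for date, text in observations:
        obs = ET.SubElement(body, "observation", effectiveTime=date)
        obs.text = text
    return ET.tostring(root, encoding="unicode")

def integrate_cdas(documents):
    """Merge the observations of several CDA-style documents for one
    patient into a single document, ordered chronologically."""
    merged = ET.Element("ClinicalDocument")
    body = ET.SubElement(merged, "structuredBody")
    entries = []
    for doc in documents:
        tree = ET.fromstring(doc)
        for obs in tree.iter("observation"):
            entries.append((obs.get("effectiveTime"), obs.text))
    for date, text in sorted(entries):  # chronological order
        obs = ET.SubElement(body, "observation", effectiveTime=date)
        obs.text = text
    return ET.tostring(merged, encoding="unicode")

# Two documents for the same patient from different visits.
doc1 = generate_cda("p1", [("2016-03-01", "BP 120/80")])
doc2 = generate_cda("p1", [("2016-01-15", "HbA1c 6.1")])
merged = integrate_cdas([doc1, doc2])
```

Sorting the pooled observations by `effectiveTime` is what lets the merged document be browsed chronologically, as the abstract describes.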
2 | Heuristics for Provisioning Services to Workflows in XaaS Clouds | In XaaS clouds, resources are sold as services (e.g., infrastructure, platform, and software as a service) to applications such as scientific and big-data analysis workflows. Candidate services with different configurations (CPU type, memory size, number of machines, and so on) for the same task may differ in execution time and cost. Further, some services are rented and priced by time intervals that can be shared among tasks of the same workflow to save service rental cost. Establishing a task-mode (service) mapping that balances time and cost, and scheduling tasks on rented service instances, are crucial for minimizing the client's cost of renting services for the whole workflow. In this paper, a multiple-complete-critical-path-based heuristic (CPIS) is developed for the task-mode mapping problem. A list-based heuristic (LHCM) that accounts for task processing cost and task-slot matching is developed for scheduling tasks on service instances based on the resulting task-mode mapping. The effectiveness of the proposed CPIS is then compared with that of the previously proposed CPIL, the existing state-of-the-art heuristics PCP, SC-PCP (an extension of PCP), and DET, and with CPLEX. The effectiveness of the proposed LHCM is evaluated by combining it with different task-mode mapping algorithms. Experimental results show that, for non-shareable services, the proposed heuristics reduce the service renting cost by up to 24 percent compared with the other algorithms on the test benchmarks. In addition, about half of the service renting cost can be saved when LHCM is applied to consolidate tasks on rented service instances. | 2016 |
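The time/cost balancing at the heart of task-mode mapping can be sketched with a toy greedy heuristic on a task chain: start every task in its cheapest mode, then repeatedly upgrade the mode that buys the most time per extra unit of cost until the deadline is met. This is an illustrative simplification, not the authors' CPIS algorithm, and all names are assumptions.

```python
def map_modes(tasks, deadline):
    """Greedy sketch of task-mode mapping on a task chain: start from
    the cheapest mode per task, then, while the chain misses the
    deadline, upgrade the mode with the best time-saved-per-extra-cost
    ratio. `tasks` is a list of mode lists, each mode a (time, cost)."""
    choice = [min(range(len(m)), key=lambda i: m[i][1]) for m in tasks]

    def total_time():
        return sum(tasks[t][choice[t]][0] for t in range(len(tasks)))

    while total_time() > deadline:
        best, best_ratio = None, 0.0
        for t, modes in enumerate(tasks):
            cur_t, cur_c = modes[choice[t]]
            for i, (m_t, m_c) in enumerate(modes):
                if m_t < cur_t:  # only faster modes are upgrades
                    ratio = (cur_t - m_t) / max(m_c - cur_c, 1e-9)
                    if ratio > best_ratio:
                        best, best_ratio = (t, i), ratio
        if best is None:
            break  # deadline unreachable with the given modes
        choice[best[0]] = best[1]

    total_cost = sum(tasks[t][choice[t]][1] for t in range(len(tasks)))
    return choice, total_time(), total_cost
```

For two tasks with modes `[(4,1),(2,3)]` and `[(4,1),(1,5)]` and a deadline of 6, the heuristic upgrades only the first task (2 time units saved for 2 extra cost beats 3 saved for 4 extra), meeting the deadline at total cost 4.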
3 | Trust-but-Verify: Verifying Result Correctness of Outsourced Frequent Itemset Mining in Data-Mining-As-a-Service Paradigm | Cloud computing is popularizing the paradigm in which data is outsourced to a third-party service provider (the server) for data mining. Outsourcing, however, raises a serious security issue: how can a client of weak computational power verify that the server returned the correct mining result? In this paper, we focus on the specific task of frequent itemset mining. We consider a server that is potentially untrusted and tries to escape verification by using its prior knowledge of the outsourced data. We propose efficient probabilistic and deterministic verification approaches to check whether the server has returned correct and complete frequent itemsets. Our probabilistic approach catches incorrect results with high probability, while our deterministic approach measures result correctness with 100 percent certainty. We also design efficient verification methods for the cases in which the data and the mining setup are updated. We demonstrate the effectiveness and efficiency of our methods through an extensive set of empirical results on real datasets. | 2016 |
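The probabilistic idea can be sketched as evidence planting: before outsourcing, the client arranges some artificial itemsets whose frequent/infrequent status it knows for certain, then checks the server's answer against them. This is a generic illustration of that verification pattern under assumed names, not the paper's exact construction.

```python
def verify(server_result, evidence_frequent, evidence_infrequent):
    """Check the server's returned frequent itemsets against planted
    evidence: every artificially frequent itemset must be reported
    (completeness check) and no artificially infrequent one may appear
    (correctness check). A cheating server that misses or fabricates
    itemsets is caught with probability growing in the amount of
    planted evidence."""
    returned = {frozenset(i) for i in server_result}
    complete = all(frozenset(e) in returned for e in evidence_frequent)
    correct = all(frozenset(e) not in returned for e in evidence_infrequent)
    return complete and correct

# An honest answer includes the planted frequent itemset {"f"} and
# excludes the planted infrequent one {"x"}.
ok = verify([("a",), ("f",)], [("f",)], [("x",)])
caught = verify([("a",)], [("f",)], [("x",)])  # missing {"f"}
```

The client's work is only set membership tests, which matches the abstract's setting of a computationally weak client checking a powerful server.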
4 | Data-Driven Composition for Service-Oriented Situational Web Applications | The convergence of Services Computing and Web 2.0 opens up a large space of opportunities for composing “situational” web applications from web-delivered services. However, the large number of services and the complexity of composition constraints make manual composition difficult for application developers, who may be non-professional programmers or even end users. This paper presents a systematic data-driven approach to assisting situational application development. We first propose a technique that extracts useful information from multiple sources to abstract service capabilities as a set of tags. This supports intuitive expression of the user's desired composition goals through simple queries, without requiring knowledge of the underlying technical details. A planning technique then explores composition solutions that can achieve the desired goals, sometimes revealing new and interesting composition opportunities. A browser-based tool facilitates visual and iterative refinement of composition solutions until a satisfactory result is reached. A series of experiments demonstrates the efficiency and effectiveness of our approach. | 2015 |
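Tag-based goal-driven planning of this kind can be sketched with simple forward chaining: each service is abstracted as input tags it consumes and output tags it produces, and the planner adds services whose inputs are already satisfied until the goal tags are reached. The service names and tags below are invented for illustration; the paper's actual planner is more sophisticated.

```python
def plan(services, available, goals):
    """Forward-chaining sketch of tag-based composition: repeatedly add
    any service whose input tags are already available until every goal
    tag is produced. `services` maps name -> (input_tags, output_tags);
    returns a service chain, or None if the goals are unreachable."""
    have, chain = set(available), []
    progress = True
    while progress and not set(goals) <= have:
        progress = False
        for name, (ins, outs) in services.items():
            if name not in chain and set(ins) <= have and not set(outs) <= have:
                chain.append(name)   # service is applicable and useful
                have |= set(outs)
                progress = True
    return chain if set(goals) <= have else None

# Hypothetical services: an address can be geocoded, and coordinates
# feed a weather service.
svc = {
    "geocode": (["address"], ["lat", "lon"]),
    "weather": (["lat", "lon"], ["forecast"]),
}
chain = plan(svc, ["address"], ["forecast"])
```

A user only states the goal tag (`forecast`) and what they have (`address`); the chain `geocode -> weather` falls out of the planner, which is the "without requiring knowledge of the underlying technical details" point in the abstract.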
5 | Continuous and Transparent User Identity Verification for Secure Internet Services | Session management in distributed Internet services is traditionally based on username and password, explicit logouts, and session-expiration mechanisms using classic timeouts. Emerging biometric solutions allow username and password to be replaced with biometric data during session establishment, but in such an approach a single verification is still deemed sufficient, and the identity of the user is considered immutable for the entire session. Additionally, the length of the session timeout may impact the usability of the service and, consequently, client satisfaction. This paper explores promising alternatives offered by applying biometrics to session management. A secure protocol is defined for perpetual authentication through continuous user verification. The protocol determines adaptive timeouts based on the quality, frequency, and type of biometric data transparently acquired from the user. The functional behavior of the protocol is illustrated through Matlab simulations, while model-based quantitative analysis is carried out to assess the protocol's ability to counter security attacks mounted by different kinds of attackers. Finally, the current prototype for PCs and Android smartphones is discussed. | 2015 |
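The adaptive-timeout idea can be sketched as follows: each transparently acquired biometric sample extends the session by an amount that grows with the sample's quality and the reliability of its type. The linear trust model and all parameter values here are illustrative assumptions, not the paper's protocol.

```python
def adaptive_timeout(quality, type_weight, base=300, floor=30):
    """Sketch of an adaptive session timeout (seconds): higher-quality
    samples from more reliable biometric types (e.g. fingerprint vs.
    face) keep the session alive longer before re-verification. The
    linear model and constants are illustrative assumptions."""
    trust = max(0.0, min(1.0, quality * type_weight))
    return max(floor, base * trust)

class Session:
    """A session stays alive only while fresh biometric evidence
    keeps arriving; it expires once no verification renews it."""
    def __init__(self, now=0.0):
        self.expires = now  # expired until the first verification

    def verify(self, now, quality, type_weight):
        self.expires = now + adaptive_timeout(quality, type_weight)

    def is_alive(self, now):
        return now < self.expires

s = Session()
s.verify(0, quality=0.9, type_weight=1.0)  # strong sample: 270 s window
```

A weak sample (e.g. `quality=0.05`) only grants the 30-second floor, so an impostor who cannot supply fresh high-quality biometrics is logged out quickly, which is the security property the protocol's quantitative analysis targets.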
6 | Designing High Performance Web-Based Computing Services to Promote Telemedicine Database Management System | Many web computing systems run real-time database services whose information changes continuously and expands incrementally. In this context, web data services play a major role and drive significant improvements in monitoring and controlling information truthfulness and data propagation. Web telemedicine database services are currently of central importance to distributed systems. However, the increasing complexity and rapid growth of real-world healthcare applications make these systems hard for database administrative staff to manage. In this paper, we build integrated web data services that deliver fast response times for large-scale telehealth database management systems. Our focus is on database management, with application scenarios in dynamic telemedicine systems that increase care admissions and reduce care obstacles such as distance, travel, and time limitations. We propose a three-fold approach based on data fragmentation, database-website clustering, and intelligent data distribution. This approach reduces the amount of data migrated between websites during application execution, achieves cost-effective communication during application processing, and improves application response time and throughput. The proposed approach is validated internally by measuring the impact of our computing-service techniques on performance features such as communication cost, response time, and throughput. External validation is achieved by comparing the performance of our approach with that of other techniques in the literature. The results show that our integrated approach significantly improves the performance of web database systems and outperforms its counterparts. | 2015 |
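The distribution step of the three-fold approach can be sketched as a placement rule: once the database is fragmented and the websites are clustered, each fragment goes to the cluster whose sites access it most, so cross-cluster migration is minimized. The fragment and site names below are hypothetical, and this greedy rule is only a stand-in for the paper's intelligent data distribution.

```python
def allocate_fragments(access_log, sites_by_cluster):
    """Placement sketch for fragmented telemedicine data: given per-site
    access counts for each fragment, place each fragment at the cluster
    whose sites access it most, reducing cross-cluster migration.
    `access_log` maps fragment -> {site: count};
    `sites_by_cluster` maps cluster -> [sites]."""
    site_cluster = {s: c for c, sites in sites_by_cluster.items() for s in sites}
    placement = {}
    for frag, counts in access_log.items():
        per_cluster = {}
        for site, n in counts.items():
            c = site_cluster[site]
            per_cluster[c] = per_cluster.get(c, 0) + n
        placement[frag] = max(per_cluster, key=per_cluster.get)
    return placement

# Hypothetical example: two website clusters and two data fragments.
placement = allocate_fragments(
    {"patients": {"s1": 5, "s3": 2}, "visits": {"s2": 1, "s3": 4}},
    {"c1": ["s1", "s2"], "c2": ["s3"]},
)
```

Placing `patients` in cluster `c1` and `visits` in `c2` means the majority of accesses to each fragment are served locally, which is how the approach lowers communication cost and improves response time.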