1.1 INFORMATION CONSISTENCY
Numerous developments have accompanied the era of Cloud Computing, an Internet-based model for the development and use of computer technology. Powerful processors that were once prohibitively expensive have become affordable through the pooling of processing power and its provision on demand. The development of high-speed Internet with increased bandwidth has improved the quality of service, leading to better customer satisfaction, which is the primary goal of any organization.
The migration of data from the user's computer to remote data centers has provided the customer with great and reliable convenience. Amazon Simple Storage Service is a well-known example from one of the pioneers of cloud services. Such services eliminate the need to maintain data on a local system, which is a huge boost to quality of service. But as a result, customers are always at the mercy of the cloud service provider: the provider's downtime leaves the user unable to access his own data. Since every coin has two sides, cloud computing likewise has its fair share of security threats, and there may be further threats yet to be discovered. From the user's point of view, he wants his data to be secure; data security is therefore the most important aspect, and it ultimately leads to customer satisfaction. Users have limited control over their own data, so conventional cryptographic measures cannot be adopted directly. Thus, the data stored in the cloud should be verified occasionally to ensure it has not been modified without informing the owner. Data that is rarely used is sometimes moved to lower-tier storage, making it more vulnerable to attack. On another note, Cloud Computing not only stores data but also provides the user with functionality such as modifying the data, appending information to it, or permanently deleting it. To assure data integrity, various hashing algorithms can be used to create checksums that alert the user to data modifications.
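As a minimal sketch of this checksum idea (the file contents below are illustrative placeholders), a cryptographic digest computed before upload can later be recomputed to detect any modification:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class IntegrityChecksum {
    // Compute a SHA-256 digest of the file contents as a lowercase hex string.
    public static String sha256Hex(byte[] data) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        StringBuilder sb = new StringBuilder();
        for (byte b : md.digest(data)) sb.append(String.format("%02x", b));
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        byte[] original = "cloud file contents".getBytes(StandardCharsets.UTF_8);
        String stored = sha256Hex(original); // checksum kept by the owner
        byte[] tampered = "cloud file CONTENTS".getBytes(StandardCharsets.UTF_8);
        // Any modification changes the digest and alerts the owner.
        System.out.println(stored.equals(sha256Hex(tampered))); // prints "false"
    }
}
```

The owner only needs to keep the short digest, not a copy of the file, to notice later tampering.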
1.2 PROBLEM DEFINITION
Firstly, traditional cryptographic primitives for data security protection cannot be directly adopted, because under Cloud Computing the user loses control of the data. Whenever cloud services are involved, the user is at a disadvantage regarding the security of his files. The file is stored on a server that is a pooled resource: anyone with the user's credentials can access it, and if an attacker learns the password as well as the encryption keys, the attacker can modify the file's contents, allowing the information stored in the file to be accessed by an unauthorized user. A further problem is that someone could copy your work and claim it as his own. Anything we design or invent is governed by the principle of whether or not it guarantees customer satisfaction.
Hence, the underlying problem is whether the customer can rest assured that his data is safe from unauthorized access.
1.3 PROJECT PURPOSE
In our proposed system, we assure the user that his information is safe by implementing a system that offers three levels of security. Concerning data security, the system is divided into three main modules: the "IP triggering" module, the "client authentication" module, and the "redirecting" module. The system generates a user password and a key which is used for client authentication.
The algorithm generates two keys of 8 characters each, consisting of combinations of letters, special characters, and numbers, which are used for client authorization and file authorization.
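A minimal sketch of such a generator follows; the character pool shown is a hypothetical choice, since the exact pool used by the system is an implementation detail:

```java
import java.security.SecureRandom;

public class AccessKeyGenerator {
    // Illustrative character pool: letters, digits, and special characters.
    public static final String ALPHABET =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789!@#$%&*";
    private static final SecureRandom RNG = new SecureRandom();

    // Draw 8 characters uniformly at random from the pool.
    public static String generateKey() {
        StringBuilder key = new StringBuilder(8);
        for (int i = 0; i < 8; i++) {
            key.append(ALPHABET.charAt(RNG.nextInt(ALPHABET.length())));
        }
        return key.toString();
    }

    public static void main(String[] args) {
        System.out.println("client key: " + generateKey());
        System.out.println("file key:   " + generateKey());
    }
}
```

SecureRandom is used rather than java.util.Random so the generated keys are not predictable.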
Questions may arise as to why we use keys of only 8 characters. The purpose of our system is to prevent illegal data access even when a user's credentials are compromised. By testing against weak keys, which are easier to crack, we design our system to be more robust.
1.4 PROJECT FEATURES
Our scheme aims to prevent illegal access to users' data. A user, after registering on the system, gains the advantage of multiple layers of security. The most basic function of our system is to inform the user that his data has been accessed from an unregistered IP address, using mail-triggering events. At login, an attacker who tries to access a file using credentials stolen from the victim is presented with a dialog box asking for a key. Any key the attacker enters will not be accepted. The attacker is given three tries. After the three tries, the attacker is granted access to a fake file, which is implemented by the redirection module.
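The three-attempt logic can be sketched as follows; the file paths and key values are hypothetical placeholders, not the system's actual values:

```java
public class KeyGate {
    static final int MAX_TRIES = 3;

    // Decide which file to serve: the real file only if the secret key is
    // entered correctly within three attempts, otherwise the decoy file.
    public static String resolveFile(String expectedKey, String[] attempts) {
        for (int i = 0; i < attempts.length && i < MAX_TRIES; i++) {
            if (expectedKey.equals(attempts[i])) return "/store/real.dat";
        }
        return "/store/decoy.dat"; // the intruder is silently redirected
    }

    public static void main(String[] args) {
        String[] guesses = {"aaaa", "bbbb", "cccc"};
        System.out.println(resolveFile("Xy7#Qp2!", guesses)); // prints "/store/decoy.dat"
    }
}
```

Because the decoy is served rather than an error, the intruder has no signal that the attack failed.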
1.5 MODULES DESCRIPTION
1.5.1 CLOUD STORAGE
Data outsourcing to cloud storage servers is a rising trend among many firms and users owing to its economic advantages. This essentially means that the owner (client) of the data moves it to a third-party cloud storage server, which is supposed to – presumably for a fee – faithfully store the data and provide it back to the owner whenever required. Cloud storage increases maintainability and decreases the cost associated with storage.
1.5.2 SIMPLY ARCHIVES
The problem is to obtain and verify a proof that the data stored by a user at a remote data store in the cloud (called a cloud storage archive, or simply archive) has not been modified by the archive, thereby assuring the integrity of the data.
The file is encrypted using a symmetric-key algorithm (the same key is used for encryption and decryption) before it is stored in the cloud. The aim is to ensure the cloud archive is not cheating the owner, where cheating means that the storage archive might delete or modify some of the data.
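A minimal sketch of this symmetric-key step using AES from the Java standard library (the "AES" transformation defaults to ECB with PKCS5 padding, used here only for brevity; a production system would use an authenticated mode):

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class SymmetricFileCipher {
    // One AES operation; the same key works for both directions.
    public static byte[] apply(int mode, SecretKey key, byte[] data) throws Exception {
        Cipher cipher = Cipher.getInstance("AES");
        cipher.init(mode, key);
        return cipher.doFinal(data);
    }

    public static void main(String[] args) throws Exception {
        SecretKey key = KeyGenerator.getInstance("AES").generateKey();
        byte[] plain = "file to outsource".getBytes(StandardCharsets.UTF_8);
        byte[] stored = apply(Cipher.ENCRYPT_MODE, key, plain);     // what the archive sees
        byte[] recovered = apply(Cipher.DECRYPT_MODE, key, stored); // what the owner recovers
        System.out.println(Arrays.equals(plain, recovered)); // prints "true"
    }
}
```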
While developing proofs for data possession at untrusted cloud storage servers we are often limited by the resources at the cloud server as well as at the client.
In this scheme, unlike in the key-hash scheme, only a single key is needed irrespective of the size of the file or the number of files whose retrievability is to be verified. Also, the archive needs to access only a small portion of the file F, unlike the key-hash scheme, which required the archive to process the entire file F for each protocol verification. If the prover has modified or deleted a substantial portion of F, then with high probability it will also have suppressed a number of sentinels.
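A toy sketch of the sentinel idea, with assumed positions and counts; real sentinel schemes hide the sentinels indistinguishably inside the encrypted file, which is omitted here:

```java
import java.security.SecureRandom;
import java.util.HashMap;
import java.util.Map;

public class SentinelCheck {
    // Setup: the verifier remembers the bytes at a few random positions
    // (the "sentinels") before handing the file to the archive.
    public static Map<Integer, Byte> plantSentinels(byte[] file, int count, SecureRandom rng) {
        Map<Integer, Byte> sentinels = new HashMap<>();
        while (sentinels.size() < count) {
            int pos = rng.nextInt(file.length);
            sentinels.put(pos, file[pos]);
        }
        return sentinels;
    }

    // Challenge: only the sentinel positions are read, not the whole file.
    public static boolean verify(byte[] file, Map<Integer, Byte> sentinels) {
        for (Map.Entry<Integer, Byte> s : sentinels.entrySet()) {
            if (file[s.getKey()] != s.getValue()) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        SecureRandom rng = new SecureRandom();
        byte[] file = new byte[1024];
        rng.nextBytes(file);
        Map<Integer, Byte> sentinels = plantSentinels(file, 16, rng);
        System.out.println(verify(file, sentinels)); // prints "true"
        int pos = sentinels.keySet().iterator().next();
        file[pos] ^= (byte) 0xFF; // the archive corrupts a sentinel position
        System.out.println(verify(file, sentinels)); // prints "false"
    }
}
```

Deleting a large portion of the file is very likely to destroy at least one of the remembered positions, which is why spot-checking suffices.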
1.5.4 VERIFICATION PHASE:
Before storing the file at the archive, the verifier preprocesses the file, appends some metadata to it, and stores it at the archive. At verification time, the verifier uses this metadata to verify the integrity of the data. If the recomputed metadata does not match the metadata already stored in the database, there is an inconsistency in the file and the user is alerted with a warning message. It is important to note that our proof of information consistency protocol only checks the integrity of the data, i.e. whether the data has been illegally modified or deleted. It does not prevent the archive from modifying the data.
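The preprocessing and verification steps can be sketched as follows, assuming the metadata is simply a digest of the file (the actual metadata format is a design choice of the system):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;

public class MetadataVerifier {
    // Preprocessing: derive the metadata (here simply a SHA-256 digest)
    // that the verifier keeps before handing the file to the archive.
    public static String buildMetadata(byte[] file) throws Exception {
        return Base64.getEncoder().encodeToString(
            MessageDigest.getInstance("SHA-256").digest(file));
    }

    // Verification: recompute the metadata for the archived copy and
    // compare it with the stored value; a mismatch means modification.
    public static boolean isConsistent(String storedMetadata, byte[] archivedCopy) throws Exception {
        return storedMetadata.equals(buildMetadata(archivedCopy));
    }

    public static void main(String[] args) throws Exception {
        byte[] file = "archived report".getBytes(StandardCharsets.UTF_8);
        String metadata = buildMetadata(file);            // stored in the verifier's database
        System.out.println(isConsistent(metadata, file)); // prints "true"
        file[0] ^= 1;                                     // the archive alters one bit
        System.out.println(isConsistent(metadata, file)); // prints "false"
    }
}
```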
2.1 CLOUD COMPUTING
A literature survey is an important step in the software development process. Before developing the tool, it is necessary to determine the time factor, economy, and company strength. Once these are satisfied, the next step is to determine which operating system and language can be used for developing the tool. Once the programmers start building the tool, they need a lot of external support, which can be obtained from senior programmers, from books, or from websites. The above considerations were taken into account before building the proposed system. We begin with an outline survey of Cloud Computing:
Cloud computing provides virtually unlimited infrastructure to store and execute customer data and programs. Customers do not need to own the infrastructure; they merely access or rent it, so they can forego capital expenditure and consume resources as a service, paying only for what they use.
Instead of running programs and data on an individual desktop computer, everything is hosted in the “cloud”—a nebulous assemblage of computers and servers accessed via the Internet. Cloud computing lets you access all your applications and documents from anywhere in the world, freeing you from the confines of the desktop and making it easier for group members in different locations to collaborate.
In short, cloud computing enables a shift from the computer to the user, from applications to tasks, and from isolated data to data that can be accessed from anywhere and shared with anyone. The user no longer has to take on the task of data management; he doesn’t even have to remember where the data is. All that matters is that the data is in the cloud, and thus immediately available to that user and to other authorized users.
Benefits of Cloud Computing:
• Minimized Capital expenditure
• Location and Device independence
• Utilization and efficiency improvement
• Very high Scalability
• High Computing power
How secure is the encryption scheme?
• Is it possible for all of my data to be fully encrypted?
• What algorithms are used?
• Who holds, maintains and issues the keys?
• Encryption accidents can make data totally unusable.
• Encryption can complicate availability.
2.2 EXISTING SYSTEM
As data generation far outpaces data storage, it proves costly for small firms to frequently update their hardware whenever additional data is created. Maintaining the storage can also be a difficult task, and transmitting the file across the network to the client can consume heavy bandwidth. The problem is further complicated by the fact that the owner of the data may be a small device, such as a PDA (personal digital assistant) or a mobile phone, which has limited CPU power, battery power, and communication bandwidth.
• The main drawback of this scheme is the high resource costs it requires for the implementation.
• Also computing hash value for even a moderately large data files can be computationally burdensome for some clients (PDAs, mobile phones, etc).
• Data encryption is computationally heavy, which disadvantages small users with limited computational power (PDAs, mobile phones, etc.).
• Consumption of large amount of bandwidth in transmission of file.
2.3 PROPOSED SYSTEM
One of the important concerns that need to be addressed is to assure the customer of the integrity i.e. correctness of his data in the cloud. As the data is physically not accessible to the user the cloud should provide a way for the user to check if the integrity of his data is maintained or is compromised. In this paper we provide a scheme which gives a proof of data integrity in the cloud which the customer can employ to check the correctness of his data in the cloud. This proof can be agreed upon by both the cloud and the customer and can be incorporated in the Service level agreement (SLA). It is important to note that our proof of data integrity protocol just checks the integrity of data i.e. if the data has been illegally modified or deleted.
• Apart from reducing storage costs, data outsourcing to the cloud also helps in reducing maintenance.
• It avoids local storage of data.
• It reduces the costs of storage, maintenance, and personnel.
• It reduces the chance of losing data through hardware failures.
• It assures the owner that the archive is not cheating.
2.4 SOFTWARE DESCRIPTION
C# (pronounced see sharp) is a multi-paradigm programming language encompassing strong typing, imperative, declarative, functional, generic, object-oriented (class-based), and component-oriented programming disciplines. It was developed by Microsoft within its .NET initiative and later approved as a standard by Ecma (ECMA-334) and ISO (ISO/IEC 23270:2006). C# is one of the programming languages designed for the Common Language Infrastructure. Support for internationalization is very important.
The ECMA standard lists the design goals for C# as:
• C# language is intended to be a simple, modern, general-purpose, object-oriented programming language.
• The language, and implementations thereof, should provide support for software engineering principles such as strong type checking, array bounds checking, detection of attempts to use uninitialized variables, and automatic garbage collection. Software robustness, durability, and programmer productivity are important.
• The language is intended for use in developing software components suitable for deployment in distributed environments.
• Source code portability is very important, as is programmer portability, especially for those programmers already familiar with C and C++.
• C# is intended to be suitable for writing applications for both hosted and embedded systems, ranging from the very large that use sophisticated operating systems, down to the very small having dedicated functions.
• Although C# applications are intended to be economical with regard to memory and processing power requirements, the language was not intended to compete directly on performance and size with C or assembly language.
2.4.2 .NET FRAMEWORK PLATFORM ARCHITECTURE
Microsoft .NET is a set of Microsoft software technologies for rapidly building and integrating XML Web services, Microsoft Windows-based applications, and Web solutions. The .NET Framework is a language-neutral platform for writing programs that can easily and securely interoperate. The .NET framework provides the foundation for components to interact seamlessly, whether locally or remotely on different platforms. It standardizes common data types and communications protocols so that components created in different languages can easily interoperate.
Fig 2.1 .NET Framework Architecture (XML Web services and Windows Forms at the top, over the Base Class Libraries, over the Common Language Runtime)
The .NET Framework has two main parts:
1. The Common Language Runtime (CLR).
2. A hierarchical set of class libraries.
The CLR is described as the “execution engine” of .NET. It provides the environment within which programs run. The most important features are:
• Conversion from a low-level assembler-style language, called Intermediate Language (IL), into code native to the platform being executed on.
• Memory management, notably including garbage collection.
• Checking and enforcing security restrictions on the running code.
• Loading and executing programs, with version control and other such features.
Common Type System
The CLR uses the Common Type System (CTS) to strictly enforce type-safety. This ensures that all classes are compatible with each other by describing types in a common way. The CTS defines how types work within the runtime, which enables types in one language to interoperate with types in another language, including cross-language exception handling.
As well as ensuring that types are only used in appropriate ways, the runtime also ensures that code doesn’t attempt to access memory that hasn’t been allocated to it.
Common Language Specification
The CLR provides built-in support for language interoperability. To ensure that you can develop managed code that can be fully used by developers using any programming language, a set of language features and rules for using them called the Common Language Specification (CLS) has been defined. Components that follow these rules and expose only CLS features are considered CLS-compliant.
THE CLASS LIBRARY
.NET provides a single-rooted hierarchy of classes, containing over 7,000 types. The root of the namespace is called System; this contains basic types like Byte, Double, Boolean, and String, as well as Object. All objects derive from System.Object.
As well as objects, there are value types. Value types can be allocated on the stack, which can provide useful flexibility. There are also efficient means of converting value types to object types if and when necessary.
The OLAP Services feature available in SQL Server version 7.0 is now called SQL Server 2000 Analysis Services. The term OLAP Services has been replaced with the term Analysis Services. Analysis Services also includes a new data mining component. The Repository component available in SQL Server version 7.0 is now called Microsoft SQL Server 2000 Meta Data Services. References to the component now use the term Meta Data Services. The term repository is used only in reference to the repository engine within Meta Data Services. A SQL Server database consists of the following types of objects:
A database is a collection of data about a specific topic.
VIEWS OF TABLE:
We can work with a table in two types,
1. Design View
2. Datasheet View
To build or modify the structure of a table, we work in the table's design view. We can specify what kind of data it will hold.
To add, edit, or analyze the data itself, we work in the table's datasheet view.
A query is a question asked of the data. Access gathers the data that answers the question from one or more tables. The data that makes up the answer is either a dynaset (if you can edit it) or a snapshot (if it cannot be edited).
JScript (and the other languages) can be used for both simple tasks (such as mouseovers on Web pages) and more complex tasks (such as updating a database with ASP or running logon scripts for Windows NT).
Windows Script relies on external “object models” to carry out much of its work. For example, Internet Explorer’s DOM provides objects such as ‘document’ and methods such as ‘write()’ to enable the scripting of Web pages.
2.4.5 ASP or ACTIVE SERVER PAGES
When a browser requests an ASP.NET file, the ASP.NET engine reads the file, compiles and executes the scripts in it, and returns the result to the browser as plain HTML. ASP.NET supports three different development models: Web Pages, MVC (Model View Controller), and Web Forms.
Web Pages (single-page model):
1. Simplest ASP.NET model.
2. Similar to PHP and classic ASP.
3. Built-in templates and helpers for databases, video, graphics, social media, and more.

MVC (Model View Controller):
1. MVC separates web applications into three components: models for data, views for display, and controllers for input.

Web Forms (event-driven model):
1. Traditional ASP.NET event-driven development model.
2. Web pages with added server controls, server events, and server code.

Fig 2.2 Development Models for ASP.NET
ASP.NET is a new ASP generation. It is not compatible with Classic ASP, but ASP.NET may include Classic ASP. ASP.NET pages are compiled, which makes them faster than Classic ASP. ASP.NET has better language support, a large set of user controls, XML-based components, and integrated user authentication.
ASP.NET pages have the extension .aspx, and are normally written in VB (Visual Basic) or C# (C sharp). User controls in ASP.NET can be written in different languages, including C++ and Java.
Here are highlights of some of the new features:
Navigation: ASP.NET has a new higher-level model for creating site maps that describe your website. Once you create a site map, you can use it with the new navigation controls to let users move comfortably around your website.
Master pages: With master pages, you can define a template and reuse it effortlessly. On a similar note, ASP.NET themes let you define a standardized set of appearance characteristics for controls, which you can apply across your website for a consistent look.
Data providers: With the new data provider model, you can extract information from a database and control how it’s displayed without writing a single line of code. ASP.NET 2.0 also adds new data controls that are designed to show information with much less hassle (either in a grid or in a browser view that shows a single record at a time).
Portals: One common type of web application is the portal, which centralizes different information using separate panes on a single web page.
Administration: To configure an application in ASP.NET 1.x, you needed to edit a configuration file by hand. Although this process wasn't too difficult, ASP.NET 2.0 streamlines it with the WAT (Website Administration Tool), which works through a web page interface.
3.1 FUNCTIONAL REQUIREMENTS
In software engineering, a functional requirement defines a function of a software system or one of its modules. A function is defined as a set of inputs, the behavior, and outputs. Functional requirements may be calculations, technical details, data handling and processing, and other specific functionality that defines what a system is supposed to achieve. Behavioral requirements, describing all the cases where the system uses the functional requirements, are captured in use cases.
Here, the system has to do the following tasks:
• Take user id and password along with secret key, match it with corresponding database entries. If a match is found then continue else raise an error message.
• Encrypt the file to form a new encrypted file by using an encryption algorithm.
• Must be able to retrieve the original file from the encrypted file using the corresponding decryption algorithm.
• If any modification is performed on encrypted file, owner of the file should be notified.
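The first requirement can be sketched as below; the in-memory user table and the credentials in it are hypothetical stand-ins for the real database:

```java
import java.util.HashMap;
import java.util.Map;

public class LoginCheck {
    // Hypothetical in-memory stand-in for the credentials table:
    // user id -> { password, secret key }.
    static final Map<String, String[]> USERS = new HashMap<>();
    static { USERS.put("alice", new String[]{"p@ssw0rd", "Xy7#Qp2!"}); }

    // The user id, password, and secret key must all match a database entry.
    public static boolean authenticate(String id, String password, String secretKey) {
        String[] row = USERS.get(id);
        return row != null && row[0].equals(password) && row[1].equals(secretKey);
    }

    public static void main(String[] args) {
        System.out.println(authenticate("alice", "p@ssw0rd", "Xy7#Qp2!")); // prints "true"
        System.out.println(authenticate("alice", "p@ssw0rd", "wrong"));    // prints "false"
    }
}
```

If any of the three values fails to match, the caller would raise the error message required above.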
3.2 NON-FUNCTIONAL REQUIREMENTS
In systems engineering and requirements engineering, a non-functional requirement is a requirement that specifies criteria that can be used to judge the operation of a system, rather than specific behaviors. This should be contrasted with functional requirements that define specific behavior or functions. The plan for implementing functional requirements is detailed in the system design. The plan for implementing non-functional requirements is detailed in the system architecture.
Other terms for non-functional requirements are “constraints”, “quality attributes”, “quality goals”, “quality of service requirements” and “non-behavioral requirements”.
Some of the quality attributes are as follows:
Accessibility is a general term used to describe the degree to which a product, device, service, or environment is accessible by as many people as possible.
In our project people who have registered with the cloud can access the cloud to store and retrieve their data with the help of a secret key sent to their email ids.
User interface is simple and efficient and easy to use.
In software engineering, maintainability is the ease with which a software product can be modified in order to:
• Correct defects
• Meet new requirements
New functionality can be added to the project based on user requirements simply by adding the appropriate files to the existing project using the ASP.NET and C# programming languages.
Since the programming is very simple, it is easy to find and correct defects and to make changes in the project.
The system is capable of handling increased total throughput under an increased load when resources (typically hardware) are added.
System can work normally under situations such as low bandwidth and large number of users.
Portability is one of the key concepts of high-level programming. It is the ability of a code base to be reused, rather than rewritten, when moving software from one environment to another.
The project can be executed under different operating conditions provided it meets its minimum configuration. Only system files and dependent assemblies would have to be configured in such a case.
3.3 HARDWARE REQUIREMENTS
• Processor : Dual Core Processor
• Hard Disk : 40 GB
• Floppy Drive : 1.44 MB
• Monitor : VGA and High Resolution Monitor
• Input Device : Standard Keyboard and Mouse
• RAM : 256 MB
3.4 SOFTWARE REQUIREMENTS
• Operating system : Windows 10/8/7/XP
• Front End : JAVA, Swing(JFC),RMIJ2ME
• Back End : MS-Access
• Tool : Eclipse
4.1 DESIGN GOALS
To enable secure outsourcing of files under the aforementioned model, our mechanism design should achieve the following security and performance guarantees:
4.1.1 INPUT/OUTPUT PRIVACY
No sensitive information from the customer's private data should be derivable by the cloud server while performing the encryption and transfer.
The local computation done by the customer should be substantially less than the computation required to process the file itself. The computation burden on the cloud server should be within the time complexity of existing practical algorithms for the encryption and decryption of files.
4.2 SYSTEM ARCHITECTURE
Here the client sends a query to the server. Based on the query, the server sends the corresponding file to the client. Client authorization is then performed by checking the user id and password. On the server side, the client's name and password are checked as part of the security process. If they are satisfied, the server receives the queries from the client, searches for the corresponding files in the database, finds the file, and sends it to the client. If the server detects an intruder, it sets an alternative path for that intruder: the intruder is asked for the password multiple times and is finally directed to a fake file. The intruder does not know that the file obtained is fake; he believes the file he got is the original.
Fig 4.1 System Architecture
4.3 DATA FLOW DIAGRAM
The Data Flow Diagram (DFD), also known as a bubble chart, is a simple graphical representation of a system. The system is represented in terms of the input data to the system, the various processing steps carried out on these data, and the output data generated by the system.
Fig 4.2 Data Flow Diagram
4.4 SEQUENCE DIAGRAM
The sequence diagrams are an easy way of describing the system’s behavior. It focuses on the interaction between the system and the environment. This UML diagram shows the interaction arranged in a time sequence. It has two dimensions: the vertical dimension and the horizontal dimension. The vertical dimension used in UML sequence diagram represents the time and the horizontal dimension used represents the different objects. The vertical line is also called the object’s lifeline. It represents the object’s presence during the interaction.
Fig 4.3 Sequence Diagram
4.5 USE CASE DIAGRAM
A use-case diagram is a graph of users or actors: a set of use cases enclosed by a system boundary, together with the participation associations between the actors and the use cases, and generalizations among the use cases.
Thus, a use-case diagram describes the outside (actors or users) and the inside (use cases) of the system's typical behavior. An ellipse bearing its name is used to show each use case initiated by actors or users.
An actor, or user, is one who communicates with a use case. The actor's name is written beneath it, and an arrow symbol is used to show the interaction between the actor and the use case.
Fig 4.4 Use Case Diagram
4.6 CLASS DIAGRAM
Fig 4.5 Class Diagram
4.7 ACTIVITY DIAGRAM
An activity diagram consists of numerous states that represent operations. The transition from one state to another is triggered by the completion of an operation. A rounded box bearing the operation's name is used in the diagram to indicate the execution of that operation. An activity diagram shows the inner state of an object.
Fig 4.6 Activity Diagram
Among the various stages of a project, the part that converts the theoretical design into a working system is known as Implementation, making it one of the critical phases in developing a successful system.
In the Implementation phase we carefully plan as well as probe the existing system, keeping in mind the constraints of the implementation.
5.1 MAIN MODULES
5.1.1 CLIENT MODULE
In this module, the server receives a query sent by the client. Depending on the query, the client is served the required files by the server. Before the server serves the request, client authorization takes place: the server matches the client's credentials for security. Only if they match the database is the request serviced and the corresponding file served. If an unauthorized user is detected, redirection to the dummy file takes place.
5.1.2 SYSTEM MODULE
The figure above illustrates the network architecture of cloud data storage (Figure 1). Three different network entities can be identified as follows:
Clients, who have data to be stored in the cloud and rely on the cloud for data computation, comprise both individual consumers and organizations.
CLOUD SERVICE PROVIDER (CSP)
A CSP has significant resources and expertise in building and managing distributed cloud storage servers, and owns and operates live Cloud Computing systems.
THIRD PARTY INSPECTOR (TPI)
An optional TPI, who has expertise and capabilities that consumers may not have, is trusted to assess and expose the risks of cloud storage services on behalf of consumers upon request.
5.1.3 CLOUD DATA STORAGE MODULE
The user's data is stored on cloud servers with the help of the CSP and processed sequentially; the user contacts the servers via the CSP to access or retrieve his own data. In rare scenarios, the user may need to perform fine-grained modifications on the data. Users are provided with security mechanisms so that they can perform data modifications at the server level without needing to store the data on their own systems. The optional TPI can monitor the data for users who cannot spare the time to do so. In our proposed system, every communication between the user and the server is authenticated, which makes the system reliable.
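One common way to authenticate each message is sketched here with an HMAC over an assumed shared key; the system's actual authentication protocol may differ:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class MessageAuth {
    // Tag a user<->server message with an HMAC over a shared key, so that
    // a tampered or forged message fails verification.
    public static String tag(byte[] sharedKey, String message) throws Exception {
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(sharedKey, "HmacSHA256"));
        return Base64.getEncoder().encodeToString(
            mac.doFinal(message.getBytes(StandardCharsets.UTF_8)));
    }

    // Simple equality check; a production system would compare in constant time.
    public static boolean verify(byte[] sharedKey, String message, String receivedTag) throws Exception {
        return tag(sharedKey, message).equals(receivedTag);
    }

    public static void main(String[] args) throws Exception {
        byte[] key = "shared-secret".getBytes(StandardCharsets.UTF_8);
        String t = tag(key, "GET /files/report.doc");
        System.out.println(verify(key, "GET /files/report.doc", t)); // prints "true"
        System.out.println(verify(key, "GET /files/other.doc", t));  // prints "false"
    }
}
```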
5.1.4 CLOUD AUTHENTICATION SERVER
The Authentication Server (AS) implements the functionality of a typical AS, with three levels of security in addition to the traditional client-authentication practice. As a first addition, the client authentication information is sent to the masked router. The AS used in this proposed system also acts as a ticketing authority, controlling permissions on the system network. Other functionalities include updating client lists, reducing client authentication time, and revoking a user's access.
5.1.5 UNAUTHORIZED DATA MODIFICATION AND CORRUPTION MODULE
An important aspect of our proposed system is preventing unauthorized access to the file, which could result in data modification or even data corruption. The system should also provide information regarding the unauthorized user, such as the time of access and the IP address of the intruder.
5.1.6 ANTAGONIST MODULE
Threats can originate from two different sources. A cloud service provider with malicious intent may move the data to less secure storage and may also hide data losses that occur due to errors.
From the other direction, a person who can compromise a number of cloud storage servers may perform data modification attacks while remaining undetected by the cloud service provider.
The purpose of testing is to discover errors. Testing is the process of trying to discover every conceivable fault or weakness in a work product. It provides a way to check the functionality of components, sub-assemblies, assemblies, and/or a finished product. It is the process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and does not fail in an unacceptable manner. There are various types of test; each type addresses a specific testing requirement.
TYPES OF TESTS
6.1 UNIT TESTING
Unit testing involves the design of test cases that validate that the internal program logic is functioning properly and that program inputs produce valid outputs. All decision branches and internal code flow should be validated. It is the testing of individual software units of the application, and it is done after the completion of an individual unit before integration. This is structural testing that relies on knowledge of the unit's construction and is invasive. Unit tests perform basic tests at the component level and test a specific business process, application, and/or system configuration. Unit tests ensure that each unique path of a business process performs accurately to the documented specifications and contains clearly defined inputs and expected results.
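A dependency-free sketch of the idea, exercising one decision branch per case (the unit under test is a made-up example, not part of the project):

```java
public class UnitTestExample {
    // Unit under test: the smallest piece of program logic we can isolate.
    static String grade(int score) {
        if (score >= 90) return "A";
        if (score >= 75) return "B";
        return "C";
    }

    static void check(boolean ok, String name) {
        if (!ok) throw new AssertionError("failed: " + name);
    }

    // Each test case covers exactly one decision branch of grade().
    public static void main(String[] args) {
        check(grade(95).equals("A"), "score 95 -> A");
        check(grade(80).equals("B"), "score 80 -> B");
        check(grade(10).equals("C"), "score 10 -> C");
        System.out.println("all unit tests passed");
    }
}
```

In practice a framework such as JUnit plays the role of check() and reports each case separately.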
6.2 INTEGRATION TESTING
Integration tests are designed to test integrated software components to determine if they actually run as one program. Testing is event driven and is more concerned with the basic outcome of screens or fields. Integration tests demonstrate that although the components were individually satisfactory, as shown by successful unit testing, the combination of components is correct and consistent. Integration testing is specifically aimed at exposing the problems that arise from the combination of components.
6.3 VALIDATION TESTING
An engineering validation test (EVT) is performed on first engineering prototypes to ensure that the basic unit performs to design goals and specifications. It is important for identifying design problems, and solving them as early in the design cycle as possible is the key to keeping projects on time and within budget. Too often, product design and performance problems are not detected until late in the product development cycle, when the product is ready to be shipped. The old adage holds true: it costs a penny to make a change in engineering, a dime in production, and a dollar after a product is in the field.
Verification is a Quality control process that is used to evaluate whether or not a product, service, or system complies with regulations, specifications, or conditions imposed at the start of a development phase. Verification can be in development, scale-up, or production. This is often an internal process.
Validation is a Quality assurance process of establishing evidence that provides a high degree of assurance that a product, service, or system accomplishes its intended requirements. This often involves acceptance of fitness for purpose with end users and other product stakeholders.
The testing process overview is as follows:
Figure 6.1: The testing process
6.4 SYSTEM TESTING
System testing of software or hardware is testing conducted on a complete, integrated system to evaluate the system’s compliance with its specified requirements. System testing falls within the scope of black box testing, and as such, should require no knowledge of the inner design of the code or logic.
As a rule, system testing takes, as its input, all of the “integrated” software components that have successfully passed integration testing and also the software system itself integrated with any applicable hardware system(s).
System testing is a more limited type of testing; it seeks to detect defects both within the “inter-assemblages” and also within the system as a whole.
System testing is performed on the entire system in the context of a Functional Requirement Specification(s) (FRS) and/or a System Requirement Specification (SRS).
System testing tests not only the design, but also the behavior and even the believed expectations of the customer. It is also intended to test up to and beyond the bounds defined in the software/hardware requirements specification(s).
6.5 TESTING OF INITIALIZATION AND UI COMPONENTS
Serial Number of Test Case: TC 01
Module Under Test: Database Connection
Description: When the client program is executed, it tries to connect to the database (SQL Server) using the data source and catalogue.
Output: If the connection details are correct, the database is connected; if they are incorrect, an exception is thrown.
Remarks: Test successful.
Table 6.1: Test case for connection setup
Serial Number of Test Case: TC 02
Module Under Test: User Registration
Description: A page where users enter their details to register themselves with the database server.
Input: User details such as first name, last name, age, mail id, etc.
Output: If the user's details are correct and match the required format, the user is registered. If the user is already registered, an exception is thrown.
Remarks: Test successful.
Table 6.2: Test case for User Registration
Serial Number of Test Case: TC 03
Module Under Test: User Login
Description: When the user tries to log in, the user's details are verified against the database.
Input: User ID, password, and secret key.
Output: If the login details are correct, the user is logged in and the user page is displayed. If the login details are incorrect, an error message is displayed.
Remarks: Test successful.
Table 6.3: Test case for User Login
Serial Number of Test Case: TC 04
Module Under Test: File Upload
Description: When the user submits a file, it is stored in the database after encryption.
Input: The user selects the file to be submitted.
Output: If the file details are correct, the file is encrypted and stored in the database, and a security key is sent to the owner's mail for verification.
Remarks: Test successful.
Table 6.4: Test case for File Upload
Serial Number of Test Case: TC 05
Module Under Test: Secret Key Verification
Description: When the user enters the secret key for login or for a submitted file, it is verified with the server.
Input: Secret key.
Output: If the secret key matches the value stored in the database, the user can verify the content and grant permission for download. If the secret key does not match, a message is displayed.
Remarks: Test successful.
Table 6.5: Test case for Verifying Secret Key
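The secret-key comparison exercised by TC 05 could be sketched as below; the class and key values are illustrative, and `MessageDigest.isEqual` is used so the comparison does not leak timing information:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

// Hypothetical sketch of the TC 05 check: the submitted key is compared
// with the stored value using a constant-time byte comparison.
public class KeyCheck {
    static boolean keyMatches(String submitted, String stored) {
        return MessageDigest.isEqual(
                submitted.getBytes(StandardCharsets.UTF_8),
                stored.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) {
        System.out.println(keyMatches("K7-42-XY", "K7-42-XY")); // true
        System.out.println(keyMatches("guess", "K7-42-XY"));    // false
    }
}
```

In the real system the stored value would be fetched from the database for the logged-in user rather than passed in directly.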
Serial Number of Test Case: TC 06
Module Under Test: File Modification
Description: When an unauthorized user changes the content of a file.
Output: A message from the admin.
Remarks: Test successful.
Table 6.6: Test case for Modification Performed
Fig 7.1 Screen Layout of Main Page
Fig 7.2 Screen Layout when ID and Password are correct
Fig 7.3 Screen Layout of User Login
Fig 7.4 Snapshot of User Login Asking Unique Key
Fig 7.5 Screen Layout for Administrator
Fig 7.6 Screen Layout to add New User
Fig 7.7 Snapshot of Available Resources which User can Download
Fig 7.8 Screen Layout Showing Restricted IP
Fig 7.9 Screen Layout of Available Resources
Fig 7.10 Snapshot of Hacker Information
Fig 7.11 Snapshot of Adding Resources
Fig 7.12 Screen Layout of Blocking an IP
Fig 7.13 Snapshot of Removing Blocked IP
CONCLUSION AND FUTURE ENHANCEMENT
In this paper, we examined the security issues associated with storing information on the cloud. To prevent illegal access to a user's data on the cloud, we devised an efficient system architecture that supports efficient operations on data such as modification, deletion, and appending. We implemented IP triggering, which sends a mail to the user's registered email address on unauthorized access of data. At the second level, we used a password login system with a key to prevent access to files by anyone other than the owner. Even if the key is exposed, the system detects illegal access by comparing the requesting IP against its database; if they do not match, the request is redirected to a dummy file, thus preventing the user's data from being corrupted or modified.
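The IP-check-and-dummy-file idea described above can be sketched as follows; the class name, user names, and paths are illustrative, not taken from the project code:

```java
import java.util.Map;

// Sketch of the IP-triggering gate: if the requesting IP is not the one
// registered for the owner, the request is served a dummy file path instead
// of the real one (a mail alert would also be raised at that point).
public class IpGate {
    private final Map<String, String> registeredIpByUser;

    IpGate(Map<String, String> registeredIpByUser) {
        this.registeredIpByUser = registeredIpByUser;
    }

    // Returns the path the requester should receive.
    String resolvePath(String user, String requestIp, String realPath) {
        String registered = registeredIpByUser.get(user);
        if (registered != null && registered.equals(requestIp)) {
            return realPath; // legitimate owner: serve the real file
        }
        // Mismatch: hand back a dummy file so the real data is never exposed.
        return "/dummy/" + user + ".txt";
    }

    public static void main(String[] args) {
        IpGate gate = new IpGate(Map.of("alice", "198.51.100.4"));
        System.out.println(gate.resolvePath("alice", "198.51.100.4", "/files/a.doc"));
        System.out.println(gate.resolvePath("alice", "203.0.113.9", "/files/a.doc"));
    }
}
```

The design choice here is that a mismatch fails silently from the attacker's point of view: instead of an error, the intruder receives harmless dummy content while the alert mail goes to the owner.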
The area of cloud computing is still maturing, so many vulnerabilities remain undiscovered and the field faces a fair number of challenges. The most hopeful development we can expect from cloud computing is giving users a reasonable degree of control over their own data. A further step is automating the system to detect modification attacks by computing hash-based checksums of the data, for example with SHA, and comparing them over time.
8.2 FUTURE ENHANCEMENT
• We shall implement a hashing algorithm that ensures the integrity of a file over a period of time.
• A mobile-alerts feature will keep users updated about any modification attacks performed on their files.
• We will implement an additional layer of security by using the owner's MAC address, which is unique to each user's device.
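The planned hashing enhancement could work as sketched below: a SHA-256 checksum of the file contents is stored at upload time and recomputed later, and any mismatch signals a modification attack. The class name and sample data are illustrative:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Sketch of the planned integrity check: store a SHA-256 checksum at upload
// time, recompute it later, and treat any mismatch as a modification attack.
public class Checksum {
    static String sha256Hex(byte[] data) throws NoSuchAlgorithmException {
        byte[] digest = MessageDigest.getInstance("SHA-256").digest(data);
        StringBuilder hex = new StringBuilder(digest.length * 2);
        for (byte b : digest) hex.append(String.format("%02x", b));
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        byte[] original = "file contents".getBytes(StandardCharsets.UTF_8);
        byte[] tampered = "file content!".getBytes(StandardCharsets.UTF_8);
        String stored = sha256Hex(original);
        System.out.println(stored.equals(sha256Hex(original))); // true: intact
        System.out.println(stored.equals(sha256Hex(tampered))); // false: modified
    }
}
```

Because even a one-byte change produces a completely different digest, a periodic recomputation of this checksum is enough to alert the user to silent modifications.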
OOPS – Object Oriented Programming Concepts
TCP/IP – Transmission Control Protocol/Internet Protocol
JDBC – Java Database Connectivity
EIS – Enterprise Information Systems
BIOS – Basic Input/Output System
RMI – Remote Method Invocation
JNDI – Java Naming and Directory Interface
ORDBMS – Object-Relational Database Management System
CSP – Cloud Service Provider
J2ME – Java 2 Micro Edition
9.3 SITES REFERRED