
Chapter 1
INTRODUCTION
Sometimes we pass by a restaurant without knowing its current deals or menu prices. To find out, we have to go inside the restaurant and check, which is time-consuming and can be embarrassing: if you do not have enough money in your wallet, you have to leave the restaurant.

So, we are working on a smartphone application that shows a restaurant's price menu and deals directly on the screen of the user's smartphone.


1.1 Overview
To design an augmented reality-based application that provides instant access to a restaurant's latest deals, promotions, and menu on a user's smartphone.

1.2 Statement of Problem
Everyone is busy in daily life, and technology is at its peak, facilitating and bringing comfort to it. We can get instant access to almost anything through our smartphones. Our application is based on the same idea: it brings a fast-food restaurant's menu and deals to the smartphone without the user physically entering the restaurant.
We are therefore designing an Android application in which the above-mentioned problems are tackled, so users no longer have to worry about checking food and deals in person.

1.3 Purpose of the Project
Our application targets Android users and residents of Islamabad only. Anyone with an Android smartphone can easily download the application from the Google Play Store. The scope of the project is limited to fast-food restaurants in Islamabad that have their own websites, because the idea behind the application is that the user captures an image of a restaurant's signboard or logo, and the application matches the patterns/text of that image against a database we have populated in advance.

1.4 Applications of the Research
Our project is aimed at users in a hurry, and especially at tourists. We target the food industry: users benefit from instant access to restaurant information, and restaurant owners benefit from free promotion and quicker, better-informed customers.

1.5 Summary
We are working on a smartphone application that shows a restaurant's price menu and deals on the screen of the user's smartphone. Anyone with an Android smartphone can easily download the application from the Google Play Store. Our project is aimed at users in a hurry, and especially at tourists.

Chapter 2
LITERATURE REVIEW
The purpose of this chapter is to describe the literature studied and analyzed to develop Deal Hunter, an augmented reality-based application. The chapter reviews the relevant research papers and existing similar projects, and outlines the proposed working of our Android application.

2.1 Past Work
Traditionally in the restaurant industry, a user has to walk into a restaurant, look over the menu, and then order something. What if the food is unaffordable, or the taste does not justify the price? That leaves the user with a bad impression.

There are several applications that provide instant access to menu price lists, but only inside the restaurant itself, e.g. Menu Snap, Snap Food, etc.

It is much more convenient than Google-searching your food items one by one. Menu Snap puts all that information about your food one tap away. Use it when you visit a shiny new restaurant in the neighborhood: scan the food names, see what each dish looks like, and translate each item into over 50 languages.

Figure 2.1 Scan Menu

Figure 2.2 Search Menu
Another way to reach a restaurant is through its website or a landline/cellular contact number. But if the line is busy, the website is out of date, or the server is down, you cannot reach the restaurant at all.
These options neither provide ease to the user nor save time.

2.2 Our Work
Nowadays people live in a smart generation; most people carry a smartphone in their pocket all the time.

We are working on a smartphone application that brings the menu and other details about a restaurant to the user's smartphone screen.
How will it work? Very easily: just capture the name/logo of the restaurant, and the application fetches the menu and other details about that restaurant for you.
The best part of the project is the application's responsiveness and the time it saves: searching for a restaurant by its logo takes only milliseconds.

2.3 Research Work
We have studied a number of papers related to our project, Vuforia, and digital image processing.

2.3.1 Vuforia
Vuforia is an Augmented Reality Software Development Kit (SDK) for mobile devices that enables the creation of augmented reality applications.
The Vuforia SDK supports a variety of 2D and 3D target types, including markerless Image Targets, 3D Multi-Target configurations, and a form of addressable fiducial marker known as a Frame Marker. Additional features of the SDK include localized occlusion detection using 'Virtual Buttons', runtime image target selection, and the ability to create and reconfigure target sets programmatically at runtime.
2.4 Their Limitations and Bottlenecks
Our application is for Android users and residents of Islamabad only. Anyone with an Android smartphone can easily download the application from the Google Play Store. The scope of the project is limited to fast-food restaurants in Islamabad that have their own websites, because the idea behind the application is that the user captures an image of a restaurant's signboard or logo, and the application matches the patterns/text of that image against a database we have populated in advance.

2.5 Summary
This chapter reviewed the relevant research papers and existing similar projects, and outlined the proposed working of our Android application: a smartphone application that brings a restaurant's menu and other details to the user's smartphone screen.

Chapter 3
TOOLS AND TECHNIQUES
System implementation is the realization of a technical specification or algorithm as a program, software component, or other computer system through programming and deployment. In this chapter the system implementation is explained in detail, including the tools and techniques used to complete the task and the algorithms employed.

3.1 Hardware used with technical specifications
The system was tested on an Android device running Android 6.0 (Marshmallow): an Honor 6X with a 5.5-inch 1080×1920 display, 3 GB RAM, and a Kirin 655 octa-core processor (4× 2.1 GHz + 4× 1.7 GHz).

3.2 Software(s), simulation tool(s) used
3.2.1 Augmented Reality
Augmented reality is a broad field of technology combining multiple research areas, whose goal is to merge physical reality with computer-generated graphics. A definition of augmented reality can be found in a survey by Azuma, which defines AR as systems that have the following three characteristics:
Combines real and virtual
Interactive in real time
Registered in 3-D
3.2.1.1 How Augmented Reality Works
For many of us, augmented reality implies a technical side. AR can use a wide range of data (images, animations, videos, and 3D models), and users see the result in both natural and synthetic light. Unlike in VR, users remain aware of being in the real world, an awareness enhanced by computer vision.
AR can be displayed on various devices: screens, glasses, handheld devices, mobile phones, and head-mounted displays. It involves technologies such as S.L.A.M. (simultaneous localization and mapping) and depth tracking (briefly, sensor data calculating the distance to objects), and the following components:

Cameras and sensors
Sensors are generally on the outside of the augmented reality device; they gather a user's real-world interactions and pass them on to be processed and interpreted. Cameras are also located on the outside of the device and visually scan the surroundings to collect data about the area. The device takes this information, which often determines where surrounding physical objects are located, and then builds a digital model to decide on the appropriate output. In the Microsoft HoloLens, specific cameras perform specific duties, such as depth sensing. Depth-sensing cameras work in tandem with two "environment understanding cameras" on each side of the device. Another common type of camera is a standard multi-megapixel camera (like the ones used in smartphones) to record images and videos, and sometimes information to assist with augmentation.

Processing
Augmented reality devices are essentially miniature supercomputers packed into small wearable devices. They require significant computer processing power and use many of the same components that our smartphones do: a CPU, a GPU, flash memory, RAM, a Bluetooth/Wi-Fi microchip, a global positioning system (GPS) microchip, and more. Advanced augmented reality devices, such as the Microsoft HoloLens, add an accelerometer (to measure the speed at which your head is moving), a gyroscope (to measure the tilt and orientation of your head), and a magnetometer (to function as a compass and work out which direction your head is pointing) to provide a truly immersive experience.

Projection
While "projection-based augmented reality" is a category in itself, here we are specifically referring to a miniature projector often found in a forward- and outward-facing position on wearable augmented reality headsets. The projector can essentially turn any surface into an interactive environment. As mentioned above, the information gathered by the cameras examining the surrounding scene is processed and then projected onto a surface in front of the user, which could be a wrist, a wall, or even another person. The use of projection in augmented reality devices means that screen real estate will eventually become less important. In the future, you may not need an iPad to play an online game of chess, because you will be able to play it on the tabletop in front of you.

Reflection
Mirrors are used in augmented reality devices to assist the way your eye views the virtual image. Some augmented reality devices have "an array of many small curved mirrors" (as in the Magic Leap device), and others have a simple double-sided mirror with one surface reflecting incoming light to a side-mounted camera and the other surface reflecting light from a side-mounted display to the user's eye. In the Microsoft HoloLens, the "mirrors" are see-through holographic lenses (Microsoft refers to them as waveguides) that use an optical projection system to beam holograms at you. A so-called light engine emits light towards two separate lenses (one for each eye), each consisting of three layers of glass in the three primary colors (blue, green, red). The light hits those layers and then enters the eye at specific angles, intensities, and colors, producing the final holistic image on the eye's retina. Regardless of the method, these reflection paths share the same goal: to align the image with the user's eye.

3.2.2 Unity 3D
Unity 3D is a cross-platform 3D engine and a user-friendly development environment created by Unity Technologies. It is easy enough for the beginner and powerful enough for the expert. Unity should interest anybody who wants to easily create 3D games and applications for mobile, desktop, the web, and consoles.

3.2.2.1 Lean Touch
When you create mobile games, you often want to make use of multi-touch gestures such as pinch and twist. However, Unity makes this difficult, because it only provides the Input.touches array, requiring you to do all the calculations yourself.
With Lean Touch you no longer have to worry about these issues, because all the touch gesture calculations are done for you in a very simple and elegant way.

Lean Touch also allows you to simulate multi touch gestures on desktop, so you don’t have to waste lots of time deploying to your mobile devices while you setup your input.

Lean Touch is an input library that makes handling gestures (e.g. Pinch, Twist) incredibly easy across all devices and resolutions.

It combines mouse and touch inputs, so you only have to write your input code once, and it will work perfectly across desktop and mobile devices.

It accounts for varying DPI settings, so it will work the same on small and large devices.

It allows you to simulate multiple finger gestures (e.g. Pinch, Twist) using the mouse, so you can easily test your inputs without deploying to a device.

It allows you to hook into input events (e.g. OnPinch, OnFingerTap), or you can directly poll the current finger states.

It allows you to check to see if the mouse or any finger is on top of a GUI element, to prevent input from ‘passing through’ buttons (supports old and new GUI).

It comes with full source code that’s well commented, so you can easily modify or extend the features.

Figure 3.2.2.1: Lean Touch

Figure 3.2.2.2: Lean Touch Editor
Working
You can access all finger data from the Lean.Touch.LeanTouch class. The easiest way to begin is by hooking into the static events it exposes.

For example, if you want to perform an action when a finger touches the screen, you can hook into the Lean.Touch.LeanTouch.OnFingerDown event.
Figure 3.2.2.3: Scan Logo
Button Working

Script Button working
This event is called every time a finger begins touching the screen, and it is passed a Lean.Touch.LeanFinger instance as a parameter. It is recommended to hook into these events in OnEnable and unhook in OnDisable.

Figure 3.2.2.4: Back Button

Script Lean Touch
To see what other events are available, I recommend reading the LeanTouch.cs script and its comments.

Another way to access finger data is to poll the Lean.Touch.LeanTouch.Fingers static list directly. This list stores all fingers currently touching the screen, and you can read it at any time (e.g. in Update) to handle input quickly.

Script Lean Touch
3.2.3 Vuforia
Vuforia is an Augmented Reality Software Development Kit (SDK) for mobile devices that enables the creation of augmented reality applications.
The Vuforia SDK supports a variety of 2D and 3D target types, including markerless Image Targets, 3D Multi-Target configurations, and a form of addressable fiducial marker known as a Frame Marker. Additional features of the SDK include localized occlusion detection using 'Virtual Buttons', runtime image target selection, and the ability to create and reconfigure target sets programmatically at runtime [5].
It tracks image targets and simple 3D objects, such as boxes, in real time. This enables developers to position virtual objects, such as 3D models, relative to real images viewed through the camera of a mobile device, so that the virtual object appears to be part of the real-world scene. We use a similar idea: our application detects the target (the restaurant's logo) and augments the relevant content over it. The Vuforia SDK has some limitations; for example, it does not detect human body parts directly.
Vuforia, previously known as QCAR, is an SDK for developing AR applications on mobile devices, and it is compatible with Android and iOS. Vuforia gives developers the ability to use 2D and 3D markers, which serve as reference points in the real physical world. These markers are then tracked by the built-in tracking method, which makes it easy to apply animations or other computer-generated graphics that respond to the markers. The coordinate system is centered at the marker and rotated by the marker's orientation in the 3D environment.
The SDK was released in February 2012 under its new brand, as it was updated from QCAR SDK 1.0 to Vuforia SDK 1.5. The system architecture is as follows: the developer provides target images to the Target Management Application, which loads the targets into the mobile application development tools as resources. The resources are used by the functionality in the QCAR library, which in turn is called by the developer.

Figure 3.2.3: Vuforia System Overview
3.2.4 Android SDK
The Android Software Development Kit (SDK) was released in 2007 shortly after the Android platform was made public. The SDK was introduced to enable development of applications (‘apps’) to the new platform, and includes a debugger, software libraries, a handset emulator, sample code and tutorials. The Android SDK can be downloaded from http://developer.android.com/sdk/index.html, and is compatible with Windows, Mac OS, and Linux.

Figure 3.2.4: Unity Interface
3.2.4.1 Activity-lifecycle concept
To navigate transitions between stages of the activity lifecycle, the Activity class provides a core set of six callbacks: onCreate(), onStart(), onResume(), onPause(), onStop(), and onDestroy(). The system invokes each of these callbacks as an activity enters a new state.

Figure 3.2.4.1: Activity-lifecycle concept
3.2.4.2 Lifecycle callbacks
This section provides conceptual and implementation information about the callback methods used during the activity lifecycle.

onCreate()
You must implement this callback, which fires when the system first creates the activity. On activity creation, the activity enters the Created state. In the onCreate() method, you perform basic application startup logic that should happen only once for the entire life of the activity.

onStart()
When the activity enters the Started state, the system invokes this callback. The onStart() call makes the activity visible to the user, as the app prepares for the activity to enter the foreground and become interactive.
The onStart() method completes very quickly and, as with the Created state, the activity does not stay resident in the Started state. Once this callback finishes, the activity enters the Resumed state, and the system invokes the onResume() method.

onResume()
When the activity enters the Resumed state, it comes to the foreground, and the system invokes the onResume() callback. This is the state in which the app interacts with the user. The app stays in this state until something happens to take focus away from it.

Such an event might be, for instance, receiving a phone call, the user navigating to another activity, or the device screen turning off. When an interruptive event occurs, the activity enters the Paused state, and the system invokes the onPause() callback. If the activity returns to the Resumed state from the Paused state, the system once again calls the onResume() method. For this reason, you should implement onResume() to initialize components that you release during onPause().

onPause()
The system calls this method as the first indication that the user is leaving your activity (though it does not always mean the activity is being destroyed). Use the onPause() method to pause operations, such as animations and music playback, that should not continue while the activity is in the Paused state and that you expect to resume shortly. You can also use onPause() to release system resources, such as broadcast receivers, handles to sensors (like GPS), or any resources that may affect battery life while your activity is paused and the user does not need them.

onStop()
When your activity is no longer visible to the user, it has entered the Stopped state, and the system invokes the onStop() callback. This may happen, for example, when a newly launched activity covers the entire screen. The system may also call onStop() when the activity has finished running and is about to be terminated. In the onStop() method, the app should release all resources that are not needed while the user is not using it.

When your activity enters the Stopped state, the Activity object is kept resident in memory: it maintains all state and member information, but is not attached to the window manager. When the activity resumes, it recalls this information, so you do not need to re-initialize components that were created during any of the callback methods leading up to the Resumed state.
The system also keeps track of the current state of each View object in the layout, so if the user entered text into an EditText widget, that content is retained and you do not have to save and restore it.

onDestroy()
Called before the activity is destroyed. This is the final call the activity receives. The system invokes this callback either because the activity is finishing (due to someone calling finish()), or because the system is temporarily destroying the process containing the activity to save space. You can distinguish between these two scenarios with the isFinishing() method.

The system may also call this method when an orientation change occurs, and then immediately call onCreate() to recreate the process (and the components it contains) in the new orientation. The onDestroy() callback releases any resources that have not yet been released by earlier callbacks, such as onStop().

3.3 Summary
In this chapter the system implementation was explained in detail, including the tools and techniques used to complete the task and the algorithms employed: the realization of a technical specification or algorithm as a program, software component, or other computer system through programming and deployment.

Chapter 4
METHODOLOGY
System implementation is the realization of a technical specification or algorithm as a program, software component, or other computer system through programming and deployment. This chapter explains the techniques used to complete this task and the algorithms employed.

4.1 Design of the investigation
We use an agile model for the implementation of the project, so we have room for changes, since technology keeps advancing over time. When new changes need to be implemented, the flexibility agile provides for change is vital. New changes can be implemented at almost no cost because of the frequency with which new increments are produced.
To satisfy the requirements of a given augmented reality system, the foundation is to estimate the position and orientation of the camera with respect to the world frame, or vice versa. To do this we use two image-based tracking techniques: marker-based tracking and natural feature tracking.

4.1.1 System Architecture
The System Architecture Diagram can be seen in the figure below. This diagram gives the overview of the system.

Users
Front End
Scan Logo
Vuforia API
Image Detection
Image Recognition
Database

Figure 4.1.1: System Architecture
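The flow in Figure 4.1.1 (scan logo → detection → recognition → database lookup) can be sketched in a few lines. This is a hypothetical illustration only, not the project's actual code: the restaurant names, feature sets, and prices below are invented, and in the real application Vuforia performs the image detection and recognition rather than simple set overlap.

```python
# Toy stand-in for the database of Figure 4.1.1: each restaurant record
# stores a logo "fingerprint" (here just a set of strings), its menu, and deals.
LOGO_DATABASE = {
    "kfc": {"features": {"red", "stripes", "colonel"},
            "menu": {"Zinger Burger": 350}, "deals": ["Midweek Deal"]},
    "mcdonalds": {"features": {"yellow", "arches", "red"},
                  "menu": {"Big Mac": 550}, "deals": ["McSaver"]},
}

def recognize(scanned_features, threshold=0.5):
    """Return the restaurant whose stored logo features best overlap the scan."""
    best_name, best_score = None, 0.0
    for name, record in LOGO_DATABASE.items():
        score = len(scanned_features & record["features"]) / len(record["features"])
        if score > best_score:
            best_name, best_score = name, score
    if best_score >= threshold:          # accept only confident matches
        return best_name, LOGO_DATABASE[best_name]
    return None, None                    # unknown logo

name, record = recognize({"yellow", "arches"})
print(name, record["deals"])  # → mcdonalds ['McSaver']
```

A hit returns the matched restaurant's record, from which the front end can display the menu and deals; an unrecognized logo returns nothing.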

4.1.2 Data Flow Diagram
The Data Flow Diagram can be seen in the figure below. This diagram gives a brief description of the system.

Figure 4.1.2: Data Flow Diagram

4.1.3 Sequence Diagram
The Sequence Diagram can be seen in the figure below. This diagram gives a brief description of the system.

Figure 4.1.3: Sequence Diagram
4.1.4 State Transition Diagram
The State Transition Diagram can be seen in the figure below. This diagram gives a brief description of the system.

Figure 4.1.4: State Transition Diagram

4.2 Procedures
4.2.1 Marker-Based Tracking
Marker tracking makes use of the camera image to detect optical square markers and estimate their pose relative to the camera. A square marker consists of a black square with a white border of a predefined size. Inside the square, the ID of the marker is encoded. Different techniques can be used to encode the ID, such as template matching or encoding it as a binary number.
After acquisition, the camera image is converted to a grayscale image to speed up the image processing steps, edge detection, and registration. By using the camera image as the background of our display and using the pose of the marker, we can superimpose the virtual object on the camera image. Whenever the marker or the camera moves, the augmentation stays on the marker [4].
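The grayscale pre-processing step described above can be sketched in plain Python. This is an illustrative sketch, not the project's code: the thresholding step is added for illustration (a common way to make the dark marker square stand out from its white border), the luminance weights are the standard Rec. 601 coefficients, and the frame data is invented.

```python
def to_grayscale(rgb_frame):
    """Rec. 601 luminance: 0.299*R + 0.587*G + 0.114*B per pixel."""
    return [[0.299 * r + 0.587 * g + 0.114 * b for (r, g, b) in row]
            for row in rgb_frame]

def threshold(gray_frame, cutoff=128):
    """1 = dark pixel (candidate marker), 0 = light pixel (background)."""
    return [[1 if v < cutoff else 0 for v in row] for row in gray_frame]

# A 2x2 toy frame: black, white, red, and green pixels.
frame = [[(0, 0, 0), (255, 255, 255)],
         [(255, 0, 0), (0, 255, 0)]]
binary = threshold(to_grayscale(frame))
print(binary)  # → [[1, 0], [1, 0]]
```

Working on a single-channel binary image like this is what makes the subsequent edge detection and registration steps fast.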

4.2.2 Natural Feature Tracking
Natural feature tracking is an image-based tracking technique that detects and tracks features naturally present in the image itself, such as corners and edges, without using specially designed ID markers. There are several natural feature tracking approaches, such as SIFT and SURF.
Using SIFT, fewer than 1% of frames suffered tracking failures. With SURF, tracking failures occurred in 48% of the real and 22% of the synthetic sequence frames. This can be easily understood by looking at the average number of matches obtained with SURF.
The approaches differ mostly in the image features that are associated between the image and a model of the object or environment to be tracked. In any case, the tracking pipeline can be described as shown in the figure. For instance, "keypoint detection" is usually accomplished with a fast corner detector that computes feature points in the camera image [5].

Figure 4.2.2: Natural Feature Tracking Pipeline
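The feature-association step of the pipeline above can be sketched with a toy nearest-neighbour matcher. This is an illustrative sketch only: real systems match high-dimensional SIFT/SURF descriptors, whereas here each descriptor is just a 2-D point, and the ratio-test threshold is an assumed value.

```python
def dist(a, b):
    """Euclidean distance between two descriptors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def match(frame_desc, model_desc, ratio=0.8):
    """Nearest-neighbour matching with a Lowe-style ratio test: keep a match
    only if the best model descriptor is clearly closer than the second best."""
    matches = []
    for i, d in enumerate(frame_desc):
        ranked = sorted(range(len(model_desc)), key=lambda j: dist(d, model_desc[j]))
        best, second = ranked[0], ranked[1]
        if dist(d, model_desc[best]) < ratio * dist(d, model_desc[second]):
            matches.append((i, best))    # (frame keypoint index, model keypoint index)
    return matches

model = [(0.0, 0.0), (10.0, 10.0), (20.0, 0.0)]
frame = [(0.5, 0.1), (19.0, 1.0), (5.0, 5.0)]   # the last point is ambiguous
print(match(frame, model))  # → [(0, 0), (1, 2)]
```

The ratio test discards the ambiguous third point, which is equidistant from two model features; such rejection of weak matches is one reason SIFT-style matching fails less often than the SURF figures quoted above.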
As the tracking system, we used Vuforia SDK image targets, creating our own image target [11]. The image is used as a target, and features are extracted from it. The augmentable rating of this image was 5 on a scale of 0 to 5, which means the image is tracked effectively by the proposed AR system. We then developed our augmented reality application using the Unity3D software, importing the tracking database generated in the Vuforia developer portal.
We also took advantage of the "extended tracking" option, which enables our application to deliver a continuous experience even when the target is no longer visible in the camera's field of view. As the target leaves the view, Vuforia uses other information from the environment to infer the target's position by visually tracking the surroundings.
Vuforia builds a map around the target specifically for this purpose and assumes that both the environment and the target are largely static.

4.3 Implementation procedure
The main development environment used in this project is Unity, together with the Vuforia SDK. The language used for scripting in Unity is C#. Edraw Max is used for the diagrams included in the project documentation.

4.4 Summary
In this chapter the system implementation was explained in detail, including the techniques used to complete this task and the algorithms employed. We developed our augmented reality application using the Unity3D software, importing the tracking database generated in the Vuforia developer portal.

Chapter 5
SYSTEM TESTING
In this chapter we discuss system performance and evaluation. The system is evaluated using various system testing techniques. This is a key part of the application's development, as it validates the working of the system against its requirements and checks whether any errors exist and whether proper exception handling is included. We discuss several testing techniques in this chapter.

5.1 Functional Requirements
View Deals: Customers can browse the default deals and other specific deals from different restaurants.

Check Price: The system displays price details for each menu item.

Manage Deals Record: The system maintains a database containing records of all the latest deals.

Manage Restaurants Information: The system manages restaurant information in the database.

Figure 5.1: Functional Requirements
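The four requirements above can be sketched as a minimal record structure. This is a hypothetical illustration; the report does not give the actual database schema, and the Restaurant type, helper names, and sample data are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Restaurant:
    name: str
    menu: dict                    # item name -> price
    deals: list = field(default_factory=list)

db = {}                           # Manage Restaurants Information

def add_restaurant(r):
    db[r.name] = r

def update_deals(name, deals):    # Manage Deals record
    db[name].deals = deals

def price_of(name, item):         # Check price
    return db[name].menu[item]

add_restaurant(Restaurant("Burger Lab", {"Classic Burger": 450}))
update_deals("Burger Lab", ["2-for-1 Tuesday"])
print(price_of("Burger Lab", "Classic Burger"), db["Burger Lab"].deals)
# → 450 ['2-for-1 Tuesday']
```

"View Deals" then amounts to reading the deals list of the recognized restaurant's record.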
5.2 Non-Functional Requirements

5.2.1 Objective Testing
The main goal of objective testing is to ensure the quality of the software delivered, i.e. to test whether all components of the system function correctly.

Figure 5.2.1: Front Interface
5.2.2 Usability Testing
Usability testing is testing by the audience. It gives the developer feedback on how much time a user needs to perform the desired task in the application, i.e. the response time of the system, and it shows whether the user is satisfied with the application's performance. This testing is also useful for administrators who want to know what their users are doing, as it helps to monitor their activities.

5.2.2.1 User-Friendly Interface
The interface of this application is user friendly. Users can easily understand the purpose of the buttons in the system; it is simple enough that there is no need to consult a user guide.

Fig. 5.2.2 Application Interface
5.2.2.2 Easy to Use
Users can easily understand how to use the application to perform the desired task and get the desired results: just run the application, tap Scan Logo, and get the desired menu details.

Fig. 5.2.3 Restaurant Interface

Fig. 5.2.4 Restaurant Menu

Fig. 5.2.5 Restaurant Interface

Fig. 5.2.6 Restaurant Menu

Fig. 5.2.7 Restaurant Logo

Fig. 5.2.8 Restaurant Menu

Fig. 5.2.9 Welcome Page

Fig. 5.2.10 Restaurant Logo
The application detects the markers and the logo's edges and matches them against the data set. If the edges and corners of the logo match a stored object, the application shows the menu details, as the next figure illustrates.
Fig. 5.2.11 Restaurant Menu
5.2.2.3 Easy to Learn
Our application is very simple; a new user can easily learn and understand how to use it.

5.3 Software Performance Testing
The application was installed on different Android versions: 4.0 (Ice Cream Sandwich), 5.1 (Lollipop), and 6.0 (Marshmallow). Each was a clean install, and no errors occurred. The application, however, is not for users running Android versions below 4.0 (Ice Cream Sandwich).
Nowadays, no mobile phone company sells its products with old Android versions, and existing applications are generally not compatible with the old versions either. With the advancement of technology, nobody likes to use old technology, which is why this application is not designed for the old Android versions.
5.4 Compatibility Testing
This testing ensures the compatibility of the application. It checks which hardware/software resources are compatible with the application and which operating system is required to run it.
In simple words, it is the type of testing in which we:
Verify whether the user interface and user experience are the same across the Chrome, IE, and Firefox browsers, and also across the iPad, iPhone, and many Android devices.

Verify that the CSS is stable across these browsers and devices.

Verify that the functionality works the same as on the baseline platform or device.

The application is compatible with Android 4.0 (Ice Cream Sandwich) or higher. It will not work for users running an Android version older than 4.0 (Ice Cream Sandwich).

For compatibility testing, we divide the resources into the categories given below:
Test Case 1 Compatibility testing.

Description Run the application on different Android versions.

Requirements Android version must be 4.0 (Ice Cream Sandwich) or higher.
Table 5.1 Test case 1(a) (Compatibility testing)
Testing Results
The testing results show the behavior of the application on different versions of Android.

Tasks Expected Result Actual Result
Android version 1.6 (Donut) Pass / Fail Fail
Android version 2.1 (Eclair) Pass / Fail Fail
Android version 2.2 (Froyo) Pass / Fail Fail
Android version 2.3 (Gingerbread) Tested Pass / Fail Fail
Android version 3.0 (Honeycomb) Pass / Fail Fail
Android version 4.0 (Ice Cream Sandwich) Tested Pass / Fail Pass
Android version 4.1 (Jelly Bean) Tested Pass / Fail Pass
Android version 4.4 (Kit Kat) Pass / Fail Pass
Android version 5.0 (Lollipop) Pass / Fail Pass
Android version 6.0 (Marshmallow) Tested Pass / Fail Pass
Android version 7.0 (Nougat) Pass / Fail Pass
Table 5.2 Test case 1(b)
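The pass/fail boundary in Table 5.2 amounts to a single check against the device's API level (available on a device as android.os.Build.VERSION.SDK_INT). A minimal sketch, written as plain Java so it runs outside the Android framework; the VersionGate class and its helper are hypothetical:

```java
// Hypothetical sketch of the version gate behind Table 5.2: the app supports
// API level 14 (Android 4.0, Ice Cream Sandwich) and above. On a real device
// the sdkInt argument would come from android.os.Build.VERSION.SDK_INT.
public class VersionGate {
    static final int MIN_SDK = 14; // Android 4.0 (Ice Cream Sandwich)

    static boolean isSupported(int sdkInt) {
        return sdkInt >= MIN_SDK;
    }

    public static void main(String[] args) {
        System.out.println(isSupported(10)); // Android 2.3 (Gingerbread): false
        System.out.println(isSupported(14)); // Android 4.0 (ICS): true
        System.out.println(isSupported(23)); // Android 6.0 (Marshmallow): true
    }
}
```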
5.5 Installation Testing
The system was tested on an Android device running Android 6.0 (Marshmallow): an Honor 6X with a 5.5-inch, 1080×1920 display, 3 GB RAM and a Kirin 655 octa-core processor (4× 2.1 GHz + 4× 1.7 GHz).

As in Section 5.3, the application installed cleanly on Android 4.0 (Ice Cream Sandwich), 5.1 (Lollipop) and 6.0 (Marshmallow) with no errors. Devices running Android versions older than 4.0 are not supported, since no mobile phone company still ships old Android versions and existing applications are no longer compatible with them.

Test Case 1 Installation.

Description The application installs without errors.

Requirements Android version must be 4.0 (Ice Cream Sandwich) or higher.
Table 5.3 Test case 1(a)
Tasks Expected Result Actual Result
Runs on Android 4.0 (Ice Cream Sandwich) Pass / Fail Pass
Runs on Android 5.1 (Lollipop) Pass / Fail Pass
Runs on Android 6.0 (Marshmallow) Pass / Fail Pass
Table 5.4 Test case 1(b)
5.6 Test Cases
Test cases are used for step-by-step evaluation of the system, from installation to the desired results or outputs. Given below are some of the test cases of our application, which help evaluate its overall performance.

5.6.1 Test Case for Application Startup
Test Case 1 Application startup.

Description The application displays the startup screen when executed and lets the user scan the target.

Requirements The application is installed on the Android device.

Table 5.5 Test case 1(a)
Tasks Expected Result Actual Result
Application start Pass / Fail Pass
Start-up screen display Pass / Fail Pass
Table 5.6 Test case 1(b)
5.6.2 Test Case for Target Scanning
Test Case 2 Scan target.

Description Place the camera over the target to scan the defined patterns.

Requirements The application was successfully executed and the camera is in working state.

Table 5.7 Test case 2(a)
Tasks Expected Result Actual Result
Target Scan Pass / Fail Pass
3D marker points Pass / Fail Pass
Table 5.8 Test case 2(b)
5.6.3 Test Case for Image Marking
Test Case 3 Mark image.

Description Drag the marker over the logo to mark it for cutting.

Requirements Touch gestures work properly.

Table 5.9 Test case 3(a)
Tasks Expected Result Actual Result
Drag and drop marker Pass / Fail Pass
Logo marked Pass / Fail Pass
Table 5.10 Test case 3(b)
5.6.4 Test Case for Logo Cutting
Test Case 4 Cut logo.

Description Cut the marked logo region out of the image.

Requirements Touch gestures work properly.

Table 5.11 Test case 4(a)
Tasks Expected Result Actual Result
Drag and drop marker Pass / Fail Pass
Marker points Pass / Fail Pass
Table 5.12 Test case 4(b)

5.6.5 Test Case for Logo Matching
Test Case 5 Logo matching.

Description Drag the marker over the image to match its points with the database images.

Requirements Touch gestures work properly.

Table 5.13 Test case 5(a)
Tasks Expected Result Actual Result
Drag and drop marker Pass / Fail Pass
Points matching Pass / Fail Pass
Table 5.14 Test case 5(b)
5.6.6 Test Case for Menu Button
Test Case 6 Show menu.

Description The Show Menu button displays the menu after a successful image search.

Requirements Touch gestures work properly.

Table 5.15 Test case 6(a)
Tasks Expected Result Actual Result
Drag and drop marker Pass / Fail Pass
Show menu Pass / Fail Pass
Table 5.16 Test case 6(b)

Chapter 6
RESULTS AND CONCLUSION
The "Deal Hunter" application is a successful effort to help users get instant access to fast food restaurants. It is an augmented reality-based application that provides instant access to the latest deals, promotions and menus on a user's smartphone.

By developing the project, we came to understand what augmented reality basically is, how it works, and how it is implemented in a mobile application. We have also learned how system testing is done on a mobile application, especially one involving augmented reality.

It has been a great learning experience for us as developers. The application is very simple and easy to understand for every user. People can easily use it because it has a very user-friendly interface, so no special training or expertise is needed.

6.1 Presentation of the Findings
A general description of the results of the project.

6.1.1 Software Results
The project is split into different parts, and the results for each part are discussed.

6.2 Discussion of the Findings
Availability:
The system can be extended as per future requirements.

Elasticity:
The system is flexible to later requirement changes or feature enhancements.

Usability:
The interface and GUI design of the system are user friendly.

Dependability:
The system does not depend on any other external system or source.

Maintainability:
The system is easy to use and has a user-friendly interface.

6.3 Limitations
Our project/application is for Android users and residents of Islamabad only. Those who have an Android smartphone can easily download the application from the Google Play Store. The scope of the project is limited to fast food restaurants in Islamabad that have their own websites, because the idea behind the application is that the user captures an image of the restaurant's signboard or logo and the application matches the patterns/text of the image against the database in which we have already inserted the values.
6.4 Summary
In this chapter we discussed system performance and evaluation. A system or application is evaluated using various system testing techniques. This is a central part of the work, as it validates that the system works and meets its requirements.

Testing finds out whether any errors exist and whether proper exception handling is included. Several testing techniques were discussed in this chapter.

6.5 Conclusion
The project meets its goal: an augmented reality-based Android application that gives users instant access to fast food restaurants' latest deals, promotions and menus. The application is simple and has a user-friendly interface, so no special training or expertise is needed to use it.

Chapter 7
FUTURE WORK
After fulfilling the current and basic requirements, we plan to add further modules: reservation of a table inside the restaurant, QR-code integration, landline and website accessibility, and making all restaurants available in the application (currently only a few restaurants are included to meet the current requirements), along with other useful features.
