====== Results ======

This option of TAST allows the user **to see the results of the Test Cases executed**.

A list of Test Sets executed for a specific domain and project is displayed, with information about execution times and status. It can be filtered by:
   * Domain.
   * Project.
   * Test Set.
   * User.
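The four filters above combine as a simple predicate over the result rows. The sketch below is purely illustrative; the field names (`domain`, `project`, `testSet`, `user`) are assumptions, not TAST's actual data model:

```javascript
// Hypothetical shape of one row of the results list; real TAST field
// names are not documented here and are assumed for illustration.
const results = [
  { domain: "BANKING", project: "CORE", testSet: "TS_LOGIN", user: "N000000" },
  { domain: "BANKING", project: "CORE", testSet: "TS_PAYMENT", user: "XI000000" },
  { domain: "INSURANCE", project: "CLAIMS", testSet: "TS_LOGIN", user: "N000000" },
];

// Apply only the filters the user actually set (an empty filter matches all),
// mirroring the Domain / Project / Test Set / User filters of the page.
function filterResults(rows, filters) {
  return rows.filter((row) =>
    Object.entries(filters).every(
      ([field, value]) => !value || row[field] === value
    )
  );
}

console.log(filterResults(results, { domain: "BANKING", user: "N000000" }));
// → only the first row matches
```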
  
The list of test sets will also show the following information for each Test Set:

   * Name of the Test Set.
   * Diagram name / Test case data name.
   * Starting time of the Test Set execution.
   * Finishing time of the Test Set execution.
   * Status: the result of the execution (OK or KO).
   * Actions, with the following indicators:
      * Updated to ALM: YES or NO.
      * Download evidence: this option stores the evidence of the executed test cases in a directory on your PC.
      * Path of the Test Case Data.
      * Download evidences document.
  
===== Upload results to ALM =====

Sometimes you may want to optimize your time by running your tests without uploading to ALM, with the idea of uploading them later on, when you're satisfied with the results, or simply while you have lunch. The option "**Upload results to ALM**" in the top right corner will help you organize your schedule and decide when and what you want to upload to ALM.

This option allows you to upload a Test Set, or a group of them, to ALM. You can use the data fields to filter the results list. If you want to upload all the Test Sets resulting from your filter, you can tick the "**Select all**" checkbox and all the elements displayed will be selected. The checkbox is located on the left side of the list headers.\\

**IMPORTANT**: Only the visible elements will be selected; this means that if the filter result spans more than one page, only the visible page will be affected. If you want to select more Test Sets, you can navigate to the next page and select the rest; the previous selection will remain active unless you untick the "Select all" option.
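The page-scoped behaviour of "Select all" can be illustrated with a small sketch. This is not TAST's real pagination code, only a model of the semantics described above:

```javascript
// "Select all" only affects the rows currently rendered, i.e. one page
// of the filtered list; selections made on earlier pages are kept.
function selectAllOnPage(allRows, selected, page, pageSize) {
  const visible = allRows.slice(page * pageSize, (page + 1) * pageSize);
  const next = new Set(selected); // previous selection remains active
  visible.forEach((row) => next.add(row));
  return next;
}

const rows = ["TS1", "TS2", "TS3", "TS4", "TS5"];
let selected = new Set();
selected = selectAllOnPage(rows, selected, 0, 2); // page 1 → TS1, TS2
selected = selectAllOnPage(rows, selected, 1, 2); // page 2 → adds TS3, TS4
// TS5 sits on page 3 and is still unselected
```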
  
Once you are happy with the selection, press the "Upload results to ALM" button.\\

{{:en:tast-upload-results.png?2600|}}\\

A new screen will be displayed to define the parameters of the upload.\\ \\
{{:en:results-upload.png?2600|}}
  
All fields must be filled in before the upload is allowed:
  * Test Set Name (manual input).
  * ALM Domain (a dropdown box allows you to select the Domain).
  * ALM Project (a dropdown box allows you to select the Project).
  * Test Plan Folder (a pop-up window will show the Folder Tree corresponding to the Domain/Project).
  * Test Lab Folder (a pop-up window will show the Folder Tree corresponding to the Domain/Project/Test Plan).
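The all-fields-required rule can be sketched as follows. The field keys are invented for illustration and are not TAST's real form identifiers:

```javascript
// Upload is only enabled once every parameter of the upload form is set.
// The field names below are illustrative, not TAST's real identifiers.
function uploadAllowed(form) {
  const required = ["testSetName", "almDomain", "almProject", "testPlanFolder", "testLabFolder"];
  return required.every((key) => Boolean(form[key] && String(form[key]).trim()));
}

const form = {
  testSetName: "Regression_42",
  almDomain: "BANKING",
  almProject: "CORE",
  testPlanFolder: "/Plan/Regression",
  testLabFolder: "", // still missing → upload stays disabled
};
```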
  
===== Upload results to ALM step by step =====
  
To upload to ALM step by step, go to Results in TAST, choose the test set you want to upload, and click the "Upload to ALM" button, as explained in the previous section (Upload results to ALM).\\ \\
Once the pop-up appears, select the "Step by step" checkbox to activate this functionality and save the results step by step in ALM.\\ \\ {{ :en:pasitoapasoeng.png?600 |}}
  
Then configure the upload to ALM. Once it is uploaded, open ALM and search for the test you have uploaded. Then choose the execution you want, for example as in the following image.\\ \\ {{ :en:alm1.png?600 |}}\\ \\
Next, in the "new test instance" details window, click on "Runs" and then choose the run ID (and click it), for example:\\ \\ {{ :en:alm2.png?600 |}}
  
Finally, in the "run details" window, click on "Steps" and you will see all the documented steps uploaded from the executed TCD. If a step shows a clip icon, a screenshot is attached to it.\\ \\ {{ :en:alm3.png?600 |}}\\
  
===== Results Evidences Document =====
  
From the Results page, you can access the Results Evidence Document by clicking the Document Evidence button in the "Actions" column.\\ \\ {{:en:results00.png?nolink|}}\\ \\
  
==== Example of Evidence Document ====
This is an example of the Results evidence document:\\ {{:web_services_15.11.2024_06.41.34_15.11.2024_06.41.35.docx|}}\\
  
==== Structure of the Evidence Document ====
The structure of the Results Evidence Document is the following:
  
**__Introduction__**
  * Test Set ID.
  * Name of Test Case.
  * Number of steps.
  * Final Result of the Test Set.
  * Description of the Test Case.
  * Result Details.
  * Link to Diagram.
  * Link to Test Case Data.
  * Link to Test Set.

**__Execution Steps__** (repeated for each step)
  * Step number.
  * Mapped element (message).
  * Parameters.
  * Description.
  * Expected Result.
  * Actual Result.
  * Step Result.
  
==== Download of the Evidence Document ====
  
It is possible to download the evidence document by clicking the icon in the execution result and saving the document in the desired path, as shown in the following image:\\ \\ {{:en:evidence_document_example.png?800|}}
  
**IMPORTANT:** In some cases, due to the size of the evidences, the document generation takes some time and a warning message appears informing you that you should click the download button again to download the evidence document properly.\\ \\ If the screenshots are too small and you can't see them correctly, you should create an executeJavaScript message after the openUrl with the following content:
**document.getElementById("main.cntPrincipal").style.zoom = 1.5;**\\ \\
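If you need a different magnification, the content of that executeJavaScript message can be generated with a small helper. This is a sketch: the element id `main.cntPrincipal` comes from the tip above, while the helper itself is hypothetical:

```javascript
// Builds the one-line script for the executeJavaScript message that
// enlarges the page after openUrl, so screenshots become readable.
function zoomScript(factor) {
  return `document.getElementById("main.cntPrincipal").style.zoom = ${factor};`;
}

const script = zoomScript(1.5);
// script === 'document.getElementById("main.cntPrincipal").style.zoom = 1.5;'
```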
  
===== Acceptance Results =====
  
==== Introduction ====
The acceptance results are the results of a test (evidences and result documents), stored in a way that they do not get deleted.\\ \\
The time limit to generate the acceptance is 30 days; once the acceptance is created, it lasts forever. The acceptance is also generated independently for each Test Case Data (TCD) of the same Test Set (TS); that means an acceptance will be created for each TCD.
==== Features ====
In the results table, a new column called "Acceptance" has been added. That column can have three states:
  * **Empty square:** Initial state; an acceptance has never been created for that result. Tooltip "No Acceptance".\\
  * **Green check:** The acceptance has been created correctly. Tooltip "Success Acceptance".\\
  * **Red cross:** A problem appeared while creating the acceptance and the process was not completed. Tooltip "Error Acceptance".
  
==== Functionality of these buttons ====

  * **Empty square:** If we click this button, a modal window appears from which we can create the acceptance. The modal window warns us that the diagram may already have an acceptance, with the phrase: "If this diagram already has an acceptance created it will be replaced".\\ \\ If the acceptance is confirmed, it will be created. It will also replace any other acceptance that may have been created for other results of the same diagram, so only one acceptance exists per diagram.\\ \\
  * **Green check:** If we click this button, a modal window appears. This modal window contains links to the documents generated by the acceptance: the "result documents" and the "evidences". To obtain those documents there are four buttons:\\
      -  Button to copy the URL where the "result documents" of that diagram are saved.\\
      -  Button to download the "result documents" of that diagram directly from the browser.\\
      -  Button to copy the URL where the "evidences" of that diagram are saved.\\
      -  Button to download the "evidences" of that diagram directly from the browser.\\ \\
  * **Red cross:** If a problem occurred while creating an acceptance and it finally could not be created, it is indicated with this icon. If we click this button, the acceptance-creation modal window will appear again.\\ \\ If we create an acceptance by accepting the modal window from the "no acceptance" or "error acceptance" icon, the state will change to "success acceptance"; if it fails, the status will be "error acceptance".
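The three column states and the transition described above can be summarized in a small sketch. The state names mirror the tooltips; the function itself is illustrative, not TAST code:

```javascript
// Acceptance column states, named after the tooltips described above.
const NO_ACCEPTANCE = "No Acceptance";  // empty square
const SUCCESS = "Success Acceptance";   // green check
const ERROR = "Error Acceptance";       // red cross

// Accepting the creation modal (reached from the empty-square or
// red-cross icon) leads to a green check on success, red cross on failure.
// The green check itself only shows the generated documents.
function nextState(current, creationSucceeded) {
  if (current === SUCCESS) return SUCCESS;
  return creationSucceeded ? SUCCESS : ERROR;
}
```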
  
==== Control of errors ====
  * If we try to download documentation or evidences that do not exist from the URL, a new window with error 400 or 404 will appear, indicating that the resource was not found.
  * If we try to download documentation or evidences that have not been created yet from the URL, a new window with error 409 will appear, indicating that the resource is still being created and that we should try again later.
  * This happens when opening them from the URL; if we try from the modal window, a notification with the same errors will appear.
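Client code consuming those download URLs could map the statuses above to user messages roughly like this. The status codes (400/404, 409) are the ones listed; the function and message strings are illustrative:

```javascript
// Maps the HTTP status returned by the acceptance download URLs to the
// behaviour described above: 400/404 = not found, 409 = still in creation.
function acceptanceDownloadMessage(status) {
  if (status === 400 || status === 404) return "Resource not found";
  if (status === 409) return "Resource still in creation, try again later";
  if (status >= 200 && status < 300) return "OK";
  return `Unexpected status ${status}`;
}
```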
  