Posts labeled SAP BODS

Reverse Pivot in BODS

Reverse Pivot Transform in BODS: This transform converts rows into columns. It groups a set of related rows into a single row with multiple columns. Observe the icon: it indicates that the transform converts rows to columns.
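
The grouping the transform performs is easy to picture outside BODS. Here is a minimal Python sketch of the same row-to-column idea; the customer/phone columns are invented for illustration, not taken from the transform itself.

# Each customer has several phone rows; a reverse pivot folds them
# into one row per customer with one column per phone type.
rows = [
    {"cust_id": 1, "phone_type": "home",   "number": "111-1111"},
    {"cust_id": 1, "phone_type": "mobile", "number": "222-2222"},
    {"cust_id": 2, "phone_type": "home",   "number": "333-3333"},
]

pivoted = {}
for r in rows:
    # One output row per cust_id; phone_type becomes a column name.
    out = pivoted.setdefault(r["cust_id"], {"cust_id": r["cust_id"]})
    out[r["phone_type"] + "_number"] = r["number"]

print(list(pivoted.values()))
# [{'cust_id': 1, 'home_number': '111-1111', 'mobile_number': '222-2222'},
#  {'cust_id': 2, 'home_number': '333-3333'}]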

Date Generation transform in BODS

This is a very useful transform for creating time dimension tables. It generates a column of dates, incremented as you specify. Options:
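
The snippet is cut off before the option list, but the core idea can be sketched in plain Python: generate a run of dates at a fixed increment and derive typical time-dimension attributes. The increment and derived columns below are assumptions for illustration, not the transform's actual option names.

import datetime

start, end = datetime.date(2024, 1, 1), datetime.date(2024, 1, 5)
increment = datetime.timedelta(days=1)   # a daily step; other step sizes work the same way

d = start
while d <= end:
    # Year, month, and weekday are typical derived time-dimension attributes.
    print(d.isoformat(), d.year, d.month, d.strftime("%A"))
    d += increment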

Key Generation in BODS

Key_Generation: To generate artificial keys in Data Integrator, you can use either the Key_Generation transform or the Key_Generation function. It fetches the maximum existing key value from the table and uses it as the starting value. Based on this start key, the transform/function increments the key value for each row.
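
The logic as described reduces to a few lines; here is a hedged Python sketch (the table and column names are invented for illustration):

# Key_Generation logic: start from the current max key,
# then hand out max+1, max+2, ... for each incoming row.
existing_keys = [10, 11, 12]          # stands in for SELECT MAX(cust_key) FROM dim_customer
next_key = max(existing_keys, default=0)

new_rows = [{"name": "Acme"}, {"name": "Globex"}]
for row in new_rows:
    next_key += 1                      # increment per row, as the transform does
    row["cust_key"] = next_key

print(new_rows)   # [{'name': 'Acme', 'cust_key': 13}, {'name': 'Globex', 'cust_key': 14}]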

Table Comparison in BODS

Table Comparison: This is an important transform in BODS. It compares two data sets and produces their difference as a data set with rows flagged as INSERT, UPDATE, or DELETE. The Table_Comparison transform allows you to detect and forward changes that have occurred since the last time a target was updated. For those coming from BW, you can appreciate the importance of this transform if you have done full and delta loads to DSOs and cubes.
Data Inputs
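
Conceptually the transform does a keyed diff between source and target. A minimal Python sketch of that diff, with an invented primary key pk:

target = {1: {"pk": 1, "city": "NYC"}, 2: {"pk": 2, "city": "LA"}}
source = {1: {"pk": 1, "city": "NYC"}, 2: {"pk": 2, "city": "Texas"}, 3: {"pk": 3, "city": "Boston"}}

flagged = []
for pk, row in source.items():
    if pk not in target:
        flagged.append(("INSERT", row))          # new in source
    elif row != target[pk]:
        flagged.append(("UPDATE", row))          # changed since the last load
for pk, row in target.items():
    if pk not in source:
        flagged.append(("DELETE", row))          # gone from source

print(flagged)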

Data_Transfer Transform in BODS

This transform helps transfer data in an efficient way. Using it, we can push operations down to the database (such as a GROUP BY or ORDER BY on a database table). Example: assume we are doing a lookup on some data and a GROUP BY on the same data in the same data flow. If we are doing that on millions of records, it is a performance hit. If we use the Data_Transfer transform instead, the data flow is split and runs as separate data flows: one for the lookup and another for the GROUP BY/ORDER BY.
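
The pushdown idea itself is general. This hedged sqlite3 sketch (invented table and columns) shows what Data_Transfer enables: staging rows into a database table so the GROUP BY runs in the database rather than row by row in the ETL engine.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stage (region TEXT, amount REAL)")
conn.executemany("INSERT INTO stage VALUES (?, ?)",
                 [("east", 10.0), ("east", 5.0), ("west", 7.0)])

# With the rows staged in a table, the GROUP BY is pushed down to the
# database instead of being computed in the engine's memory.
for region, total in conn.execute(
        "SELECT region, SUM(amount) FROM stage GROUP BY region"):
    print(region, total)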

History Preserving Transform in BODS

This transform is used to preserve the history of the data. Suppose there is a customer whose address changed from NYC to LA, and after some time he moved to Texas. If the table is simply updated with the new location or address, all of the customer's past locations are overwritten. There is a need to preserve the history of the customer's locations to analyze the old data. Using the History_Preserving transform, all of the historical data can be saved. To apply it, you need a Table_Comparison transform placed before this transform.
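
The pattern being described is the classic slowly changing dimension (Type 2): instead of overwriting, close out the old row and insert a new one. A hedged Python sketch, with invented valid_from/valid_to columns:

history = [
    {"cust_id": 7, "city": "NYC", "valid_from": "2020-01-01", "valid_to": "9999-12-31"},
]

def move(history, cust_id, new_city, change_date):
    # Close the currently open row instead of overwriting it...
    for row in history:
        if row["cust_id"] == cust_id and row["valid_to"] == "9999-12-31":
            row["valid_to"] = change_date
    # ...and append a new row carrying the new address.
    history.append({"cust_id": cust_id, "city": new_city,
                    "valid_from": change_date, "valid_to": "9999-12-31"})

move(history, 7, "LA", "2021-06-01")
move(history, 7, "Texas", "2023-03-15")
print(history)   # all three locations are preserved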

Data Quality Match Transformation in SAP BODS

The Match transform is used to identify duplicates in the data based on match criteria and a weighted score. It is used to determine duplicates and consolidate them. Using this transform we can-
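
The weighted-score idea can be sketched in a few lines of Python. The fields, weights, and threshold below are invented for illustration; the transform's own scoring is configured through its match criteria.

import difflib

def score(a, b, weights):
    # Weighted similarity across fields; difflib gives a 0..1 ratio per field.
    return sum(w * difflib.SequenceMatcher(None, a[f], b[f]).ratio()
               for f, w in weights.items())

weights = {"name": 0.7, "city": 0.3}
r1 = {"name": "Jon Smith",  "city": "Boston"}
r2 = {"name": "John Smith", "city": "Boston"}

# Treat the pair as duplicates if the weighted score clears a threshold.
s = score(r1, r2, weights)
print(round(s, 3), s > 0.9)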

Hierarchy Flattening in BODS

This transform constructs a complete hierarchy from parent/child relationships, and produces a description of the hierarchy in vertically or horizontally flattened format.
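
A hedged Python sketch of the vertical flattening: expand each parent/child pair into one row per ancestor/descendant pair with a depth. The example hierarchy is invented.

parent_child = [("CEO", "VP1"), ("CEO", "VP2"), ("VP1", "Mgr1")]

children = {}
for parent, child in parent_child:
    children.setdefault(parent, []).append(child)

def flatten(ancestor, node=None, depth=0):
    # Vertical flattening: one row per (ancestor, descendant, depth).
    node = ancestor if node is None else node
    for child in children.get(node, []):
        yield (ancestor, child, depth + 1)
        yield from flatten(ancestor, child, depth + 1)

for row in flatten("CEO"):
    print(row)
# ('CEO', 'VP1', 1)  ('CEO', 'Mgr1', 2)  ('CEO', 'VP2', 1)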

XML Pipeline in BODS

Processes large, nested XML files in small instances. With this transform, Data Services does not need to read the entire XML input into memory and build an internal data structure before performing the transformation. An NRDM structure is not required to represent the entire XML data input. Instead, the XML_Pipeline transform uses a portion of memory to process each instance of a repeatable structure, then continually releases and reuses that memory to steadily flow XML data through the transform.
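
The same streaming idea exists in Python's standard library as iterparse. A hedged sketch (the element names are invented), processing each repeatable element as it arrives instead of building the whole document tree:

import io
import xml.etree.ElementTree as ET

xml_data = io.BytesIO(
    b"<orders><order id='1'/><order id='2'/><order id='3'/></orders>")

# Handle each repeatable <order> as it streams past, then free its memory.
for event, elem in ET.iterparse(xml_data, events=("end",)):
    if elem.tag == "order":
        print(elem.get("id"))
        elem.clear()          # release memory for the processed instance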

Map CDC Transform in BODS

Using this transform's input requirements (values for the Sequencing column and a Row operation column), you can perform three functions:
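
The snippet is cut off before the list, but the two named inputs suggest the mechanics: order the rows by the sequencing column, then dispatch on the row-operation code. A hedged Python sketch; the I/U/D operation codes and column names are assumptions for illustration.

cdc_rows = [
    {"seq": 2, "op": "U", "pk": 1, "city": "LA"},
    {"seq": 1, "op": "I", "pk": 1, "city": "NYC"},
    {"seq": 3, "op": "D", "pk": 1, "city": "LA"},
]

target = {}
# Apply changes in sequencing-column order, dispatching on the op column.
for row in sorted(cdc_rows, key=lambda r: r["seq"]):
    if row["op"] in ("I", "U"):
        target[row["pk"]] = {"pk": row["pk"], "city": row["city"]}
    elif row["op"] == "D":
        target.pop(row["pk"], None)

print(target)   # {} since the row was inserted, updated, then deleted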

SAP Data Services Designer tool

This post gives you a short overview of the Data Services product and terminology. Refer to the post SAP BO Data Integrator / Data Services for more details.
Data Services Components: The following diagram illustrates Data Services product components and their relationships-

Creating Datastore

This post describes all of the preparatory work needed to define data movement specifications for a flat-file data source to a target data warehouse. In this post you will:
• Define a datastore from Data Services to your target data warehouse
• Import metadata from target tables into the local repository so that you can use the Designer to work with these tables
• Define file formats and a connection to flat-file source data
You can refer to the post Setting up the system in BODS for more details.

Debugger in SAP Data Services

This post describes how to use the debugger in Data Services.
Using the interactive debugger: The Designer includes an interactive debugger that allows you to examine and modify data row by row, by placing filters and breakpoints on lines in a data flow diagram. A debug filter functions as a simple Query transform with a WHERE clause; use a filter to reduce a data set in a debug job execution. A breakpoint is the location where a debug job execution pauses and returns control to you. This exercise demonstrates how to set a breakpoint and view data in debug mode.

Joins and Lookup in SAP Data Services

This post discusses join conditions and lookups in Data Services.
Populating the Sales Fact Table from Multiple Relational Tables: the exercise joins data from two source tables and loads it into an output table. Data Services features introduced in this exercise are:
• Using the Query transform FROM clause to perform joins
• Adding columns to an output table
• Mapping column values using Data Services functions
• Using metadata reports to view the sources for target tables and columns
In this exercise, you will:
• Populate the SalesFact table from two source tables: SalesItem (columns Cust_ID and Order_Date) and SalesOrder (columns Sales_Order_Number, Sales_Line_Item_ID, Mtrl_ID, and Price)
• Use the FROM clause in the Query transform to join the two source tables and add a filter to bring a subset of sales orders to the target
• Use the LOOKUP_EXT() function to obtain the value for the Ord_status column from
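
Outside the Designer, the join-then-lookup shape reduces to familiar code. A hedged Python sketch: the status data is invented, and LOOKUP_EXT() itself is a BODS function, emulated here with a plain dictionary.

orders = [{"Sales_Order_Number": "SO1", "Cust_ID": "C1"}]
items  = [{"Sales_Order_Number": "SO1", "Mtrl_ID": "M7", "Price": 9.5}]
status_lookup = {"SO1": "closed"}     # stands in for the LOOKUP_EXT() source table

# FROM-clause style join on the shared order number, then a lookup per joined row.
fact = []
for o in orders:
    for i in items:
        if o["Sales_Order_Number"] == i["Sales_Order_Number"]:
            fact.append({**o, **i,
                         "Ord_status": status_lookup.get(o["Sales_Order_Number"], "open")})

print(fact)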

CDC in BODS

Changed-Data Capture: This post introduces the concept of changed-data capture (CDC). You use CDC techniques to identify changes in a source table at a given point in time (such as since the previous data extraction). CDC captures changes such as inserting a row, updating a row, or deleting a row. CDC can involve variables, parameters, custom (user-defined) functions, and scripts.
Exercise overview: You will create two jobs in this exercise. The first job (Initial) initially loads all of the rows from a source table. You will then introduce a change to the source table. The second job (Delta) identifies only the rows that have been added or changed and loads them into the target table. You will create the target table from a template. Both jobs contain the following objects:
• An initialization script that sets values for two global variables: $GV_STARTTIME and $GV_ENDTIME
• A data flow that loads only the rows with dates that fall between $GV_STARTTIME and $GV_ENDTIME
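
The Delta job's filter is essentially a date-window WHERE clause. A hedged Python sketch; the row layout is invented, and the two variables play the roles of $GV_STARTTIME and $GV_ENDTIME:

import datetime

gv_starttime = datetime.datetime(2024, 1, 1)   # plays the role of $GV_STARTTIME
gv_endtime   = datetime.datetime(2024, 2, 1)   # plays the role of $GV_ENDTIME

source = [
    {"pk": 1, "last_update": datetime.datetime(2023, 12, 15)},
    {"pk": 2, "last_update": datetime.datetime(2024, 1, 20)},   # changed inside the window
]

# The Delta job loads only rows whose timestamps fall inside the window.
delta = [r for r in source if gv_starttime <= r["last_update"] < gv_endtime]
print(delta)   # only pk=2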

Recoverable workflow in BODS

This post describes how to:
• Design and implement recoverable work flows
• Use Data Services conditionals
• Specify and use the Auto correct load option
• Replicate and rename objects in the object library
Recovery Mechanisms: Creating a recoverable work flow manually. A recoverable work flow is one that can run repeatedly after failure without loading duplicate data. Examples of failure include source or target server crashes or target database errors that could cause a job or work flow to terminate prematurely.
Adding the job and defining local variables:
1. In the Class_Exercises project, add a new job named JOB_Recovery.
2. Open the job and declare these local variables:
Variable
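
The Auto correct load option is, in effect, an idempotent upsert: re-running a job after a failure never duplicates rows. A hedged Python sketch of that behavior (the keying and data are invented):

target = {}   # keyed by primary key, standing in for the target table

def auto_correct_load(rows):
    # Insert if the key is new, update if it already exists, so a
    # rerun after a failure cannot create duplicate rows.
    for row in rows:
        target[row["pk"]] = row

batch = [{"pk": 1, "val": "a"}, {"pk": 2, "val": "b"}]
auto_correct_load(batch)
auto_correct_load(batch)          # simulated recovery rerun
print(len(target))                # still 2, no duplicates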

Multiuser functionality in SAP Data Services

In this post you will create two local repositories and one central repository and perform the tasks associated with sharing objects using Data Services. You can also check the post Central Repository in BODS for more details.
Multiuser Development: This section introduces you to Data Services features that support multiuser development. Data Services enables multiple users to work on the same application. It enables teams of developers working on separate local metadata repositories to store and share their work in a central repository. You can implement optional security features for central repositories.
Introduction: Data Services can use a central repository as a storage location for objects. The central repository contains all of the information normally found in a local repository, such as definitions for each object in an application. In addition, the central repository retains a history of all its objects. However, the central repository is merely a storage location.

ABAP Dataflow & Transports in BODS

This post introduces the SAP BusinessObjects Data Services objects for extracting data from SAP application sources: SAP Applications datastores, ABAP data flows, and transports.
Note: The procedures in this section require that the software has the ability to connect to an SAP server. The sample tables provided with Data Services that are used in these procedures do not work with all versions of SAP, because the structure of standard SAP tables varies across versions.
Defining an SAP Applications datastore:
1. In the local object library, click the Datastores tab.
2. Right-click inside the blank space and click New. The "Datastore Editor" dialog box opens.
3. In the Datastore name field, type SAP_DS. This name identifies the database connection inside the software.
4. In the Datastore type list, click SAP Applications to specify the datastore connection path to the database.
5. In the Application server field, type the name of your SAP application server.

Merge Transform in SAP BODS

The Merge transform is used to merge rows from multiple tables. The only condition for the Merge transform to work is that the structure of the source tables must be the same. This is the exact opposite of what we do in the Case transform, but we cannot provide any condition in Merge. Log in to your Data Services Designer. Create a new project and call it 'Merge_Transform'. Create a job, a workflow, and a dataflow. We created some tables in the Case transform post; we will use the same tables here to merge the data. Import the target template tables from your Case transform:
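
For the concept itself: the Merge transform behaves like a SQL UNION ALL over identically structured inputs. A hedged Python sketch, with invented tables and a check that the structures match:

table_a = [{"id": 1, "region": "east"}]
table_b = [{"id": 2, "region": "west"}]

# Merge requires identical structure; with that check passed, the output
# is simply every row from every input, with no conditions (UNION ALL).
assert set(table_a[0]) == set(table_b[0]), "source structures must match"
merged = table_a + table_b
print(merged)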