Cognos Interview Questions and FAQs
1) Can we create reports using an OLTP database? How?
2) What are power prompts? Can somebody explain them?
3) How can we convert an IMR report into an Impromptu Web report?
4) Can we create a cube in Transformer from multiple IQDs? If so, how? Also, explain some of the best practices used while creating a cube in PowerPlay Transformer.
5) How do we schedule reports in Cognos? Suppose there are two users, u1 and u2, who should each receive a report once a week: u1 every Monday in his inbox and u2 every Wednesday in his inbox. How do we schedule the report for these two users?
6) How do we upgrade drivers in Cognos 8?
7) What is the difference between Cognos 8.2 and Cognos 8.3?
8) How will you migrate your reports to a QA environment?
9) What joins are present in ReportNet? Which is most widely used?
10) How is security handled in ReportNet? Explain the steps to implement it.
11) In an existing Framework Manager model, how will you add a new column to an existing table?
12) Report Studio has two SQL tabs, Native SQL and Cognos SQL. Which one gets preference, and which one should we consider?
13) Suppose I developed some reports on an Employee package and now have to develop the same reports on another package. How can I move the reports to the other package?
14) How can we publish a package to the QA environment rather than to Cognos Connection? (If we have two URLs, one for Cognos Connection and one for the QA environment, how can we publish the package to QA?)
15) What does "fiscal year" mean?
16) How do you apply security to reports?
17) How do you apply security to cubes?
18) What are the applications of the tabular model?
19) In ReportNet, how do you burst reports on every 5th, 10th, 15th, and 20th day?
20) What automation tool is used for data warehouse testing?
21) What are the names of the reports that you prepared?
22) At what level did you prepare the reports in Cognos? What automation tool are you using in data warehousing?
23) If you burst reports and some do not reach their destination, how will you identify which ones failed?
24) What is meant by roll-up and roll-back in data warehousing?
25) What migration tools are available in the market for Cognos, e.g. for migrating Impromptu reports to Cognos ReportNet?
26) What are logs in Cognos?
27) What is report bursting and where does it occur?
ETL Data Warehousing Interview Questions
What are the various methods of getting incremental records or delta records from the source systems?
Getting incremental records from source systems to the target can be done by using incremental aggregation, i.e. processing only the rows that have changed since the last session run.
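Informatica handles this at the engine level, but the core idea behind a delta extract can be sketched in plain Python. All names and sample rows below are hypothetical; the point is the high-water-mark filter on a change-tracking column:

```python
from datetime import datetime

# Hypothetical source rows; "updated_at" is the change-tracking column.
source_rows = [
    {"id": 1, "amount": 100, "updated_at": datetime(2009, 1, 5)},
    {"id": 2, "amount": 250, "updated_at": datetime(2009, 2, 20)},
    {"id": 3, "amount": 75,  "updated_at": datetime(2009, 3, 1)},
]

def extract_delta(rows, last_extract_time):
    """Return only the records changed since the previous extract run."""
    return [r for r in rows if r["updated_at"] > last_extract_time]

# The watermark would normally be persisted, e.g. in a control table.
last_run = datetime(2009, 2, 1)
delta = extract_delta(source_rows, last_run)
print([r["id"] for r in delta])  # only the rows changed after the last run
```

In a real load, the watermark is saved after each successful run so the next run picks up exactly where the previous one left off.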
How can we use mapping variables in Informatica? Where do we use them?
After creating a variable, we can use it in any expression in a mapping or a mapplet. Variables can also be used in source qualifier filters, user-defined joins, or extract overrides, and in the expression editor of reusable transformations.
Their values can change automatically between sessions.
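The "values change automatically between sessions" behavior can be illustrated with a small file-backed sketch in Python (the state file, the `$$MAX_ID` name, and the update rule are all made up for the demo; Informatica persists mapping variables in its repository, not in a JSON file):

```python
import json
import os
import tempfile

# Hypothetical state file standing in for the repository's saved variable values.
STATE = os.path.join(tempfile.gettempdir(), "session_state_demo.json")
if os.path.exists(STATE):
    os.remove(STATE)  # start clean for the demo

def load_var(name, default):
    """Read a variable's last persisted value, falling back to its default."""
    if os.path.exists(STATE):
        with open(STATE) as f:
            return json.load(f).get(name, default)
    return default

def save_var(name, value):
    """Persist a variable's value so the next 'session' run can see it."""
    state = {}
    if os.path.exists(STATE):
        with open(STATE) as f:
            state = json.load(f)
    state[name] = value
    with open(STATE, "w") as f:
        json.dump(state, f)

# "Session" run 1: variable starts at its default, then gets a new value.
v = load_var("$$MAX_ID", 0)
save_var("$$MAX_ID", max(v, 42))
# "Session" run 2: the updated value carries over automatically.
print(load_var("$$MAX_ID", 0))
```

This mirrors how a mapping variable set with SETMAXVARIABLE in one run becomes the starting value of the next run.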
Do we need an ETL tool? When do we go for the tools in the market?
ETL tools are meant to extract, transform, and load data into a data warehouse for decision making. Before the evolution of ETL tools, this process was done manually with SQL code written by programmers. That task was tedious and cumbersome in many cases, since it involved many resources, complex coding, and long work hours. On top of that, maintaining the code was a great challenge for the programmers.
These difficulties are eliminated by ETL tools, since they are very powerful and offer many advantages at every stage of the ETL process, from extraction, data cleansing, data profiling, and transformation through debugging and loading into the data warehouse, compared with the old method.
How do we extract SAP data using Informatica? What is ABAP? What are IDOCs?
Go to the Source Analyzer and click Sources; you will get the option "Import from SAP".
Click it, then supply your SAP user, client, and password, and give the table name as the filter criteria (so the import takes less time). After connecting, import the SAP source.
One important thing: after finishing the mapping, save it and generate the ABAP code for the mapping. Only then will the workflow run correctly.
What are active transformations / passive transformations?
Active transformations: transformations that can change the number of records between input and output.
Passive transformations: transformations that do not change the number of records between input and output.
What is Informatica Metadata and where is it stored?
Informatica metadata is nothing but the source definitions, target definitions, and the transformations that have been built. It is stored in the repository and managed through the Repository Manager of the Informatica client tools.
How do we call shell scripts from informatica?
You can use a Command task to call shell scripts in the following ways:
1. Standalone Command task: you can use a Command task anywhere in a workflow or worklet to run shell commands.
2. Pre- and post-session shell command: you can call a Command task as the pre- or post-session shell command for a Session task.
What is latest version of Power Center / Power Mart?
The latest version of PowerCenter (at the time of writing) is 7.1. Informatica has stopped marketing PowerMart, so PowerMart is no longer available in the market.
Can we override a native sql query within Informatica? Where do we do it? How do we do it?
We can override a SQL query in the SQL Override property of a Source Qualifier transformation.
Can we use procedural logic inside Informatica? If yes, how? If not, how can we use external procedural logic in Informatica?
We can use the External Procedure transformation to call external procedures. Both COM and Informatica procedures are supported by the External Procedure transformation.
Techniques of error handling: ignore, reject bad records to a flat file, or load the records and review them (default values).
Records are rejected either at the database, due to a constraint or key violation, or by the Informatica server when writing data into the target table. These rejected records can be found in the bad files folder, where a reject file is created for each session, and we can check why a record was rejected. The bad file contains a row indicator in the first column and a column indicator in the second column.
The row indicators are of four types:
D - valid data,
O - overflowed data,
N - null data,
T - truncated data.
Depending on these indicators, we can make changes so the data loads successfully into the target.
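Reviewing a reject file usually starts with grouping rows by their indicator. A minimal sketch in Python (the bad-file lines below are invented, and a real reject file has more columns and escaping than this handles):

```python
# Row indicators as described above (assumed layout: indicator first, then data).
ROW_INDICATORS = {"D": "valid", "O": "overflowed", "N": "null", "T": "truncated"}

# Hypothetical lines from a session's .bad file.
bad_file_lines = [
    "D,0,1001,Smith",
    "O,1,1002,A-very-long-value",
    "N,2,,",
]

def classify_rejects(lines):
    """Group rejected rows by the row indicator in the first column."""
    summary = {}
    for line in lines:
        indicator = line.split(",", 1)[0]
        reason = ROW_INDICATORS.get(indicator, "unknown")
        summary.setdefault(reason, []).append(line)
    return summary

report = classify_rejects(bad_file_lines)
print(sorted(report))  # the reject reasons found in this bad file
```

A summary like this tells you quickly whether the load failed on constraint data (nulls), sizing (overflow/truncation), or target-side rejection of otherwise valid rows.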
ETL Data Warehousing Interview Questions 2
What is a staging area? Do we need it? What is the purpose of a staging area?
A staging area is a place where you hold temporary tables on the data warehouse server. Staging tables feed the work area or fact tables. We basically need a staging area to hold the data and perform data cleansing and merging before loading the data into the warehouse.
What are the modules in PowerMart?
1. PowerMart Designer
2. Server
3. Server Manager
4. Repository
5. Repository Manager
What is a three-tier data warehouse? The three tiers are:
1. Data tier (bottom tier): consists of the database.
2. Application tier (middle tier): consists of the analytical server.
3. Presentation tier: the tier that interacts with the end user.
What are the various transformations available? The various types of transformations in Informatica are:
source qualifier
aggregator
sequence generator
sorter
router
filter
lookup
update strategy
joiner
normalizer
expression
How do we determine what records to extract? When addressing a table, some dimension key must reflect the need for a record to get extracted. Mostly it will be from the time dimension (e.g. date >= 1st of the current month) or a transaction flag (e.g. order invoiced status). A foolproof method would be to add an archive flag to each record, which gets reset whenever the record changes.
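The two extraction criteria just described, a time-dimension cutoff and a transaction flag combined with an archive flag, can be sketched in Python (the row layout and flag names here are hypothetical):

```python
from datetime import date

# Hypothetical source rows with a date, an invoiced flag, and an archive flag.
rows = [
    {"order_id": 1, "order_date": date(2009, 4, 28), "invoiced": True,  "archived": True},
    {"order_id": 2, "order_date": date(2009, 5, 3),  "invoiced": True,  "archived": False},
    {"order_id": 3, "order_date": date(2009, 5, 7),  "invoiced": False, "archived": False},
]

def records_to_extract(rows, today):
    """Pick rows by time-dimension cutoff OR by an unarchived transaction flag."""
    first_of_month = today.replace(day=1)
    return [
        r for r in rows
        if r["order_date"] >= first_of_month          # time-based filter
        or (r["invoiced"] and not r["archived"])      # flag-based filter
    ]

picked = records_to_extract(rows, date(2009, 5, 15))
print([r["order_id"] for r in picked])
```

Order 1 is skipped because it predates the month and its archive flag is already set; orders 2 and 3 qualify on the date cutoff alone.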
Data Warehousing ETL Interview Questions and Answers
What are snapshots? What are materialized views & where do we use them? What is a materialized view log?
A materialized view is a view in which the result data is physically stored. With an ordinary view, the database stores only the query, and data is fetched from the base tables each time the view is used; with a materialized view, the data itself is stored and periodically refreshed. A materialized view log is a table that records changes to the base table so that the materialized view can be fast-refreshed incrementally instead of being rebuilt from scratch.
What are the various tools?
Common ETL tools include:
Informatica
DataStage
Business Objects Data Integrator
Common OLAP tools include:
Cognos
Business Objects
Can Informatica load heterogeneous targets from heterogeneous sources?
Yes, Informatica can load heterogeneous targets from heterogeneous sources.
What is the difference between Power Center & Power Mart?
PowerCenter: can connect to single and multiple repositories; generally used in big enterprises.
PowerMart: can connect to only a single repository.
What is a mapping, session, worklet, workflow, mapplet?
Mapping – represents the flow and transformation of data from source to target.
Mapplet – a group of transformations that can be called within a mapping.
Session – a task associated with a mapping that defines the connections and other configurations for that mapping.
Workflow – controls the execution of tasks such as commands, emails, and sessions.
Worklet – a reusable group of tasks that can be called within a workflow.
What are parameter files ? Where do we use them?
A parameter file is a text file in which you can define values for the parameters used in an Informatica session. The parameter file is referenced in the session properties; when the session runs, the parameter values are fetched from the specified file. For example, if $$ABC is defined in an Informatica mapping, its value can be defined in a file called abc.txt as:
[foldername_session_name]
$$ABC='hello world'
In the session properties, you give abc.txt in the parameter file name field.
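The `[folder_session]` / `name=value` layout above can be illustrated with a small parser sketch in Python. The section and parameter names are invented, and real Informatica parameter files support more header forms (service and workflow scopes) than this toy handles:

```python
def parse_param_file(text):
    """Parse a minimal [folder_session]-style parameter file into nested dicts."""
    params, section = {}, None
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.startswith("[") and line.endswith("]"):
            section = line[1:-1]                 # new [section] header
            params[section] = {}
        elif "=" in line and section is not None:
            name, value = line.split("=", 1)     # split only on the first '='
            params[section][name.strip()] = value.strip().strip("'\"")
    return params

# Hypothetical parameter file contents.
sample = """\
[folder_s_load_orders]
$$ABC='hello world'
$$LOAD_DATE=2009-05-11
"""
parsed = parse_param_file(sample)
print(parsed["folder_s_load_orders"]["$$ABC"])
```

Scoping values under the `[folder_session]` header is what lets one physical file carry parameters for many sessions.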
What are the different Lookup methods used in Informatica?
A connected lookup receives input from the pipeline and sends output to the pipeline, and can return any number of values; it does not use a return port.
An unconnected lookup can return only one column; it does use a return port.
Can we look up a table from a Source Qualifier transformation, i.e., an unconnected lookup?
You cannot lookup from a source qualifier directly. However, you can override the SQL in the source qualifier to join with the lookup table to perform the lookup.
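The connected/unconnected distinction can be sketched outside Informatica with plain Python. This is an analogy, not Informatica's actual API; the order and customer data are hypothetical:

```python
# Hypothetical pipeline rows and a lookup table keyed by customer id.
orders = [{"order_id": 1, "cust_id": 10}, {"order_id": 2, "cust_id": 99}]
customers = {10: {"name": "Acme", "region": "East"}}

def unconnected_lookup(cust_id):
    """Unconnected-style: called like a function, returns a single value."""
    row = customers.get(cust_id)
    return row["name"] if row else None

def connected_lookup(order):
    """Connected-style: sits in the pipeline and can return several columns."""
    row = customers.get(order["cust_id"], {})
    return {**order, "name": row.get("name"), "region": row.get("region")}

print([unconnected_lookup(o["cust_id"]) for o in orders])
print([connected_lookup(o)["region"] for o in orders])
```

The unconnected form hands back one value per call; the connected form enriches each passing row with as many lookup columns as needed, which matches the one-return-port vs many-output-ports rule above.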
What is an ODS (operational data store)?
An ODS is an Operational Data Store, which holds transactional data and serves as a source for the warehouse. Data from the ODS is staged, transformed, and then moved to the data warehouse.
What is the difference between an ETL tool and OLAP tools?
An ETL tool is meant for extracting data from legacy systems and loading it into a specified database, with some data cleansing along the way. Examples: Informatica, DataStage, etc.
OLAP is meant for reporting: in OLAP, data is available in a multidimensional model, so you can write simple queries to extract data from the database. Examples: Business Objects, Cognos, etc.
What is a full load and an incremental (refresh) load?
By full load, or one-time load, we mean that all the data in the source table(s) is processed; this usually includes historical data. Once the historical data is loaded, we keep doing incremental loads to process the data that arrived after the one-time load.
What is partitioning? What are the types of partitioning?
Partitioning is a part of physical data warehouse design that is carried out to improve performance and simplify stored-data management. Partitioning breaks up a large table into smaller, independently manageable components because it:
1. reduces the work involved in adding new data;
2. reduces the work involved in purging old data.
The two types of partitioning are:
1. Horizontal partitioning.
2. Vertical partitioning (which reduces efficiency in the context of a data warehouse).
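The two partitioning styles can be shown on toy data in Python: horizontal partitioning splits rows by a key (here a hypothetical year column), vertical partitioning splits off a subset of columns:

```python
# Hypothetical fact rows.
rows = [
    {"id": 1, "year": 2008, "amount": 100, "note": "a"},
    {"id": 2, "year": 2009, "amount": 200, "note": "b"},
    {"id": 3, "year": 2009, "amount": 300, "note": "c"},
]

def horizontal_partition(rows, key):
    """Split rows into separate partitions by a key column (e.g. year)."""
    parts = {}
    for r in rows:
        parts.setdefault(r[key], []).append(r)
    return parts

def vertical_partition(rows, columns):
    """Keep only a subset of columns, as a narrow table of its own."""
    return [{c: r[c] for c in columns} for r in rows]

by_year = horizontal_partition(rows, "year")
print(sorted(by_year))          # the partition keys
print(len(by_year[2009]))       # rows landing in the 2009 partition
print(vertical_partition(rows, ["id", "amount"])[0])
```

Purging old data then becomes dropping one horizontal partition (e.g. the 2008 one) instead of deleting row by row, which is exactly the management benefit listed above.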
ETL Tool Interview Questions
What is the metadata extension?
Informatica allows end users and partners to extend the metadata stored in the repository by associating information with individual objects in the repository. For example, when you create a mapping, you can store your contact information with the mapping. You associate information with repository metadata using metadata extensions.
- Vendor-defined. Third-party application vendors create vendor-defined metadata extensions. You can view and change the values of vendor-defined metadata extensions, but you cannot create, delete, or redefine them.
- User-defined. You create user-defined metadata extensions using PowerCenter/PowerMart. You can create, edit, delete, and view user-defined metadata extensions. You can also change the values of user-defined extensions.
What are the various test procedures used to check whether the data is loaded in the back end, the performance of the mapping, and the quality of the data loaded in Informatica?
The best procedure is to take the help of the Debugger, where we can monitor each step of the mapping and see how data is loaded based on conditions and breakpoints.
Explain the process of extracting data from source systems, storing it in an ODS, and how data modelling is done.
There are various ways of extracting data from source systems, for example a DATA step or an import process; it depends on the style of your input data and what kind of file or database it resides in. Storing your data in an ODS can be done through an ODS statement, an export statement, or a FILE statement, again depending on the file and data format you want your output to be in.
Where do we use connected and unconnected lookups?
If only one return port is needed, we can go for an unconnected lookup; more than one return port is not possible with an unconnected lookup. If more than one return port is needed, go for a connected lookup.
Where do we use semi-additive and non-additive facts?
Additive: a measure that can participate in arithmetic calculations using all or any dimensions. Example: sales profit.
Semi-additive: a measure that can participate in arithmetic calculations using only some dimensions. Example: an account balance, which can be summed across accounts but not across time.
Non-additive: a measure that cannot participate in arithmetic calculations using dimensions. Example: temperature.
What is an entity relationship? How does it work with data warehousing and ETL modeling?
An entity is nothing but an object with characteristics. We speak of an entity in terms of the logical view; in the physical view, the entity is called a table. An entity relationship is nothing but maintaining primary key / foreign key relations between tables, to keep the data consistent and satisfy the normal forms.
There are four types of entity relationships:
1. one-to-one, 2. one-to-many, 3. many-to-one, 4. many-to-many.
In data warehouse modeling, the entity relationship is the relationship between dimension and fact tables (i.e., primary/foreign key relations between these tables).
The fact table gets data from the dimension tables because it contains the primary keys of the dimension tables as foreign keys, which are used to retrieve summarized data for each record.
Data Warehousing Interview Questions and Answers
What is Data warehousing?
A data warehouse can be considered a storage area where interest-specific or relevant data is stored, irrespective of the source. What is actually required to create a data warehouse is data warehousing: it merges data from multiple sources into an easy and complete form.
What are fact tables and dimension tables?
As mentioned, data in a warehouse comes from the transactions. Fact table in a data warehouse consists of facts and/or measures. The nature of data in a fact table is usually numerical.
On the other hand, dimension table in a data warehouse contains fields used to describe the data in fact tables. A dimension table can provide additional and descriptive information (dimension) of the field of a fact table.
e.g. If I want to know the number of resources used for a task, my fact table will store the actual measure (of resources) while my Dimension table will store the task and resource details.
Hence, the relation between a fact and dimension table is one to many.
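The resources-per-task example and the one-to-many relation can be sketched with toy star-schema data in Python (the task dimension, fact rows, and key names are all hypothetical):

```python
# Hypothetical star-schema fragments: one dimension row relates to many fact rows.
dim_task = {
    101: {"task": "build report", "owner": "u1"},
    102: {"task": "load cube",    "owner": "u2"},
}
fact_resources = [
    {"task_key": 101, "resources_used": 3},   # two fact rows share dim key 101,
    {"task_key": 101, "resources_used": 2},   # showing the one-to-many relation
    {"task_key": 102, "resources_used": 5},
]

def resources_per_task(facts, dim):
    """Aggregate the fact measure, then describe it with dimension attributes."""
    totals = {}
    for f in facts:
        totals[f["task_key"]] = totals.get(f["task_key"], 0) + f["resources_used"]
    return {dim[k]["task"]: v for k, v in totals.items()}

print(resources_per_task(fact_resources, dim_task))
```

The numeric measure lives only in the fact table; the dimension table supplies the descriptive labels, joined through the dimension's primary key.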
What is ETL process in data warehousing?
ETL stands for extraction, transformation, and loading. That means extracting data from different sources such as flat files, databases, or XML data, transforming this data depending on the application's needs, and loading it into the data warehouse.
Explain the difference between data mining and data warehousing?
Data mining is a method of examining large amounts of data for the purpose of finding patterns. It is normally used for modeling and forecasting: the process of finding correlations and patterns by sifting through large data repositories using pattern-recognition techniques.
Data warehousing is the central repository for the data of several business systems in an enterprise. Data from various sources is extracted and organized in the data warehouse selectively for analysis and accessibility.
What is an OLTP system and OLAP system?
OLTP stands for OnLine Transaction Processing. Applications that support and manage transactions involving high volumes of data are supported by an OLTP system. OLTP is based on client-server architecture and supports transactions across networks.
OLAP stands for OnLine Analytical Processing. Business data analysis and complex calculations on low volumes of data are performed by OLAP. With the support of OLAP, a user can gain insight into data coming from various sources.
What are cubes?
Multidimensional data is logically represented by cubes in data warehousing. The dimensions and the data are represented by the edges and the body of the cube respectively. OLAP environments view the data in the form of a hierarchical cube. A cube typically includes the aggregations that are needed for business intelligence queries.
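The "aggregations needed for business intelligence queries" can be sketched by pre-aggregating a measure over every combination of dimensions, a toy version of what a cube build does (the sales rows and dimension names are hypothetical):

```python
from itertools import combinations

# Hypothetical fact rows with two dimensions and one measure.
sales = [
    {"region": "East", "year": 2008, "amount": 10},
    {"region": "East", "year": 2009, "amount": 20},
    {"region": "West", "year": 2009, "amount": 30},
]
dims = ["region", "year"]

def cube(rows, dims, measure):
    """Pre-aggregate the measure for every combination of dimensions."""
    results = {}
    for n in range(len(dims) + 1):
        for group in combinations(dims, n):
            agg = {}
            for r in rows:
                key = tuple(r[d] for d in group)
                agg[key] = agg.get(key, 0) + r[measure]
            results[group] = agg
    return results

c = cube(sales, dims, "amount")
print(c[()][()])                   # grand total across all dimensions
print(c[("region",)][("East",)])   # total for East, rolled up over years
```

Because every grouping is precomputed, a query at any level of the hierarchy is a dictionary lookup rather than a scan, which is the performance idea behind cube aggregations.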
Data Warehousing Interview Questions and Answers
What is Data warehousing?
Answer
A data warehouse can be considered a storage area where interest-specific or relevant data is stored, irrespective of the source. What is actually required to create a data warehouse is data warehousing: it merges data from multiple sources into an easy and complete form.
Data warehousing - What is Data warehousing? - May 11, 2009 at 13:40 pm by Vidya Sagar
Answer
Data warehousing is the process of building a repository of an organization's electronic data, used for reporting and analysis. Its essential concept is to provide a data flow architecture from operational systems to decision-support environments.