Sunday 29 March 2009

Automated Cognos Report Documentation

If you need a list of all reports (Cognos ReportNet or Cognos 8), including each report's location, you can query the content store. It's easy to retrieve a list like this:


Location | Name | Created | Modified | Type | Contact | Description
GO Data Warehouse/Employee Profile | Employee Profile | 2005-09-07 18:06:45.790 | 2007-01-04 20:43:34.617 | powerplay cube | Stewart James | Human resources cube showing data for each employee.
GO Data Warehouse/Sales Target by Region | Sales Target by Region | 2005-07-17 20:04:09.890 | 2007-01-04 20:44:12.587 | powerplay report | Stewart James | Report showing sales target by region using a prompt to provide values for the report.
GO Data Warehouse/Tool Tips | Tool Tips | 2005-08-23 19:26:18.127 | 2007-01-04 20:44:30.040 | report | Stewart James | Report that shows tool tips and baselines in chart.
GO Sales and Retailers/Actual Sales against Target Sales | Actual Sales against Target Sales | 2005-05-27 20:46:43.640 | 2006-12-14 15:41:06.260 | report | Melvin John | The report shows a simple list with conditional formatting that drills through to the Sales Representative Contact List report.
GO Sales and Retailers/Actual Sales against Target Sales-burst | Actual Sales against Target Sales-burst | 2005-05-27 20:50:06.910 | 2006-12-14 15:41:15.463 | report | Melvin John | Report that is set up for bursting to email addresses. The email addresses point to sales representatives in the Sales reps table in the GOSALES database.
GO Sales and Retailers/Banded Report | Banded Report | 2005-05-27 20:57:01.480 | 2006-12-14 15:41:26.433 | report | Melvin John | Banded report that shows Product name, Quantity and Revenue with sales opportunities for each Product line category.
GO Sales and Retailers/Business Details Drill Through | Business Details Drill Through | 2005-05-27 19:12:28.627 | 2006-12-14 15:42:07.150 | report | Melvin John | Report that shows product details in charts. This report is a drill-through target report for the GO Business View report. You can also run it as a product-prompted report.
GO Sales and Retailers/Consumer Trends | Consumer Trends | 2007-01-05 15:35:24.003 | 2007-01-05 16:24:21.317 | report | Melvin John | This complex report shows a list, chart, bar chart and product images to illustrate revenue by product type.
GO Sales/Conditional Display | Conditional Display | 2006-05-30 18:42:25.157 | 2007-01-04 20:45:26.793 | report | Dropster Tina | The report uses a prompt page to query a chart and a list report.
GO Sales/Custom Legend | Custom Legend | 2005-05-19 18:57:04.317 | 2007-01-04 20:45:34.840 | report | Dropster Tina | The report shows that the legend can be customized in a similar way to the titles.
GO Sales/Orders Report | Orders Report | 2005-06-13 12:50:18.680 | 2007-01-04 20:45:52.907 | report view | Dropster Tina | This report shows the default style applied to the List Column Title and the List Column Body in a single step.
GO Sales/Product Report | Product Report | 2005-05-19 18:56:43.683 | 2007-01-04 20:47:20.690 | report | Dropster Tina | The report displays combination charts with drill-through options.
GO Sales/Retailer Report (Multiple Prompt Values) | Retailer Report (Multiple Prompt Values) | 2005-06-16 16:50:21.937 | 2007-01-04 20:56:16.270 | report | Dropster Tina | This list report accepts multiple prompt values.
GO Sales/Returns by Order Method - Prompted Chart | Returns by Order Method - Prompted Chart | 2005-08-05 17:00:31.907 | 2007-01-04 20:56:56.177 | report | Dropster Tina | This report uses a bar chart and a list to show the reasons why products are returned. It uses a prompt to filter on the return description. It is illustrated in the Getting Started tutorial.
GO Sales/Revenue by Sales Territory | Revenue by Sales Territory | 2005-06-14 18:19:12.653 | 2007-01-04 21:00:04.703 | report | Dropster Tina | The report shows a chart and a crosstab display with conditional formatting that drills through to the Orders Report.
GO Sales/Top 5 Sales Staff | Top 5 Sales Staff | 2005-07-22 23:44:55.907 | 2007-01-04 21:00:13.737 | report | Dropster Tina | This list report embeds a bar chart that shows the top five sales representatives by sales targets and revenue.
GO Sales/Top Revenue (Conditional) | Top Revenue (Conditional) | 2005-06-16 19:22:10.160 | 2007-01-04 21:00:27.347 | report | Dropster Tina | A list report that conditionally hides the Product Line group footers for revenue lower than the specified value.



The only hard part is creating the location. You have to implement a user-defined function that joins the parent cmid (pcmid) to the cmid. A stored procedure fills a table with all the info you need. I wrote a script that works for both ReportNet and Cognos 8:

First create the UDF and the stored procedure:

CREATE FUNCTION [dbo].[ufn_GetParentPath] ( @pCurrentNodeID INT )
RETURNS VARCHAR(1000)
AS
BEGIN
-- Recursively walks from a node up to the root, concatenating the names.
-- Note: SQL Server limits function recursion to 32 levels, which is ample for folder depths.
DECLARE @vCurrentNodeName NVARCHAR(256) -- matches reports.name
DECLARE @vParentID INT                  -- matches reports.pcmid
IF @pCurrentNodeID IS NULL OR @pCurrentNodeID = 0
RETURN NULL
SELECT @vCurrentNodeName = [name], @vParentID = [pcmid]
FROM [dbo].[reports]
WHERE [cmid] = @pCurrentNodeID
RETURN ISNULL([dbo].[ufn_GetParentPath] ( @vParentID ) + '/', '') + @vCurrentNodeName
END


CREATE procedure [dbo].[pLoadReportData]
as
begin
-- Recreate the staging table on every run
if exists (select * from dbo.sysobjects where id = object_id(N'[dbo].[reports]') and OBJECTPROPERTY(id, N'IsTable') = 1)
drop table reports
CREATE TABLE [dbo].[reports](
[cmid] [int] NOT NULL,
[localeid] [smallint] NOT NULL,
[mapdlocaleid] [smallint] NULL,
[isdefault] [bit] NOT NULL,
[name] [nvarchar](256) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
[pcmid] [int] NOT NULL,
[version] [int] NOT NULL,
[created] [datetime] NOT NULL,
[modified] [datetime] NOT NULL,
[disabled] [tinyint] NULL,
[class] [nvarchar](50) NOT NULL,
[dispseq] [int] NOT NULL,
[contactemail] [varchar] (128) NULL,
[contact] [varchar] (128) NULL,
[description] [nvarchar] (3772) NULL,
[owner] [int] NULL,
[path] [nvarchar](max) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[spec] XML NULL
)


create index ix_cmid on reports (cmid)


insert into reports
(
cmid
, localeid
, mapdlocaleid
, isdefault
, name
, pcmid
, version
, created
, modified
, disabled
, class
, dispseq
, owner
, path
)
select
n.cmid
, n.localeid
, n.mapdlocaleid
, n.isdefault
, n.name
, o.pcmid
, o.version
, o.created
, o.modified
, o.disabled
, c.name
, o.dispseq
, o.owner
, '' path
from cmobjnames n join cmobjects o
on n.cmid = o.cmid and o.classid in (0,1,10,18,19,33,81,82) -- restrict to the object classes of interest
join cmclasses c on o.classid = c.classid and n.isdefault = 1 -- take the default-locale name

update reports
set contact = c.contact
, contactemail = c.contactemail
from cmobjprops10 c
join reports on c.cmid = reports.cmid

update reports
set description = d.value
from cmobjprops3 d
join reports on d.cmid = reports.cmid
and d.isdefault = 1

update reports
set path = x.path
from
(SELECT [dbo].[ufn_GetParentPath] ( [cmid] ) AS [Path], [name],[cmid]
FROM [dbo].[reports]) x
where reports.cmid = x.cmid

end

Run (and schedule) the stored procedure.
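For example (the SELECT is just an illustrative sanity check of the populated table):

EXEC [dbo].[pLoadReportData]

SELECT TOP 10 path, name, class
FROM reports
ORDER BY path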

Create a view to exclude the folder, package, root and content items:


create view [dbo].[v_reports]
as
select
path
, name
, created
, modified
, class
, description
, contact
, contactemail
from reports
where class in ('report','reportview','powerplaycube','powerplay report')

The query

select * from v_reports order by path

does the final part of the job.

I implemented this for a customer with more than 37,000 (!) reports, report views, cubes and cube reports. In this case, the script runs in about 10-15 seconds.

Thursday 19 March 2009

Date range filter in Cognos 8 Report Studio based on a dynamic date and a fixed time

Today I needed to report data in Cognos 8.2 Report Studio with a date range filter based on the current day and yesterday (dynamic) and a designated (fixed) time.

Values had to be extracted with a DateTime between yesterday 9 AM and today 9 AM. As we use an Oracle database, it would be as easy as:

[DateTime] >= TO_TIMESTAMP (to_char({sysdate}-1) || ' 09:00:00.000000', 'DD-Mon-RR HH24:MI:SS.FF')
AND
[DateTime] < TO_TIMESTAMP (to_char({sysdate}) || ' 09:00:00.000000', 'DD-Mon-RR HH24:MI:SS.FF')

However, in this particular reporting environment we are not permitted to use native functions, so I wrote it in Cognos 8 syntax (as a Detail Filter expression). The idea: subtracting midnight of a given day from [DateTime] yields an interval, and requiring that interval to be at least (or less than) 9 hours pins the bound at 09:00 of that day:


[DateTime] - _make_timestamp
(
  extract(year, _add_days(current_date, -1))
, extract(month, _add_days(current_date, -1))
, extract(day, _add_days(current_date, -1))
) >= 9 hour

AND

[DateTime] - _make_timestamp
(
  extract(year, current_date)
, extract(month, current_date)
, extract(day, current_date)
) < 9 hour
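For comparison, the same window expressed in SQL Server syntax; this is a minimal sketch, and dbo.Events/EventTime are made-up names used only for illustration:

-- yesterday 09:00 <= EventTime < today 09:00
SELECT *
FROM dbo.Events
WHERE EventTime >= DATEADD(hour, 9, DATEADD(day, DATEDIFF(day, 0, GETDATE()) - 1, 0))
AND EventTime < DATEADD(hour, 9, DATEADD(day, DATEDIFF(day, 0, GETDATE()), 0))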

Tuesday 10 March 2009

The common process of a BI request

This post is the third article in the series titled

'Requirements and procedures for the successful deployment and maintenance of a Data Warehouse and Business Intelligence environment'

The articles describe the requirements for developing, deploying and maintaining a Data Warehouse environment, including Business Intelligence solutions, within the following contexts:

Change/Problem Management (the process of managing changes and problems; in other words, how does a customer and/or a user submit a change/problem request, and how is it prioritized, tracked and implemented?)
Configuration Management (the versioning, labeling and tracking of code and content components during the development life cycle)
Release Management (designating and scheduling software and hardware release components to migrate them to the live environment as a set)


We distinguish different types of Business Intelligence requests:

Incidents (IN)
General Questions (GQ)
Problem requests (PR)
(Ad hoc) reports (AQ)
Change requests (CR)
Data Manipulations (DM)
Emergency requests (ER)

Requests originate with the user. Initially, they always refer to the user layer (reports/dashboards/reporting tool) and not to the architecture that creates the product for the customer. A request can also arise within the Data Warehouse team, but it can always be allocated to one of the request types above and must always be treated as if it came from a user.

If the problem/change management tool is not capable of storing sources, attachments from requesters must be filed in a content management store so developers can access them.

Non-CRs are raised as an issue/ticket in a problem/change management tool with a certain priority. Change requests can be raised as a project or as an issue (depending on the size of the effort). Incidents, problem requests, ad hoc queries and general questions can be raised as a single ticket or can be assigned to a project (related to a CR). Priorities and response times are governed by the SLA. High-priority tickets must not only be entered in the tool but also be communicated directly to the assignee, to guarantee that they are picked up immediately.

User support must be equipped with an intake form to register the call as completely as possible, to avoid misunderstandings. User support must be trained to distinguish the different types of requests.

INCIDENTS

Incidents refer to an offline or error mode of the user layer. These are technical questions, not related to data.

Offline mode
In this case the product is not present:
The (web) portal is not accessible
The report is not refreshed
The report is not visible (on the web or on the network)

Error mode
In this case the product crashes:
1. The user cannot log in (security problem)
2. A canned (OLAP) report cannot be opened
3. A non-canned report won't run or is killed
4. An error message appears, but the result seems to be fine.
5. An error appears when a power user creates a new report.

User Support reproduces the incident (with or without terminal services) and creates a solution if a user performed incorrect actions (or in case the user forgot his authorization code). User support must possess a user configuration and a power user configuration to be able to reproduce the incident. If the user acted correctly, a high-priority ticket must be raised. In case of error 4 a low priority and in case of error 5 a medium priority is sufficient. If the problem concerns all users (e.g. offline mode of the web portal), it must be communicated to the users. In other cases user support should report the incident to the owner of the report (key user). For error type 5, User Support is the problem owner if it concerns an error related to the tool itself, its configuration, or a database connection problem. Development issues of the power users should only be managed if the metadata repository (e.g. Cognos Catalog, Business Objects Universe) is maintained by the data warehouse team and the user cannot create new definitions. If not, the reports cannot be supported and maintained.

All users (and members of the data warehouse team) are allowed to report an incident.

[Figure: incident]

In most cases the tool administrator handles it. In case of a non-refresh of a cube/report it's possible that processes other than the front-end tools are involved (network problems, database problems). In that case the tool administrator needs the help of the system administrator/database administrator. To maintain the knowledge base, the problem and solution must be documented.

GENERAL QUESTIONS

General questions concern:

- (power) user training
- request for authorization
- end-user-tool functionality explanation
- report functionality explanation
- definition explanation (e.g. 'How is revenue calculated?', 'What's a work breakdown structure?')
User Support should answer all these questions and should have documentation to deal with these kinds of requests. If it concerns definition explanation, they can refer to the key user, who should be equipped with the data warehouse definition handbook. If such a handbook does not exist or the definition is not entered in the book, User Support can raise a ticket and assign it as a low-priority ticket to administration. Training is planned by the team lead of User Support (or, if the competence is lacking there, the team lead of administration).

Only key users may raise general questions.

[Figure: general question]

To maintain the Frequently Asked Questions (FAQ) the question and answer must be documented.

PROBLEM REQUESTS

A problem request arises when, according to the user, a report does not correspond with the original specifications: other data is reported than expected. If the data warehouse team is organized in such a way that sources in production are maintained by administration, a ticket will be assigned to the administration manager. If development is also working on production sources, the ticket will be assigned to the development manager or directly to a certain developer. If the data warehouse team is equipped with substantial functional and technical documents and with configuration management and release management procedures, it's possible to hand over development knowledge to an administration team. If not, production development should stay with the original developers (although, in practice, developers leave while roles don't). The administration team can also report a problem request (e.g. reconciliation outcomes). The owner of the report (key user) needs to be informed.

[Figure: problem request 1]

To equip the assignee with all information, and for maintenance purposes, the problem information (e.g. attachments of error messages) must be saved in a content management system (and, if possible, in the problem/change management tool).

In case of a problem report or a change request the developer:

- contacts the customer to make an inventory of the specifications;
- writes a functional design;
- makes a planning and gives feedback to the requester;
- delivers the functional design to the requester;
- chases the requester to approve the functional design (redesign can be the result);
- adapts the design in case of disapproval;
- executes a feasibility study (if not attainable, the requester should get a well-founded reply);
- writes the technical design in case of approval;
- has a colleague judge the technical design (redesign can be a result);
- checks out all necessary sources;
- adds the new sources to the source id document;
- develops on the client (with a connection to the development database/DWH);
- creates test specifications;
- checks in all sources;
- writes the deliver-to-administration document so administration can publish all sources;
- promotes the sources to the test landscape so testers can perform technical tests;
- writes a migration request and acceptance test criteria;
- requests administration to promote (after approval) the sources to acceptance and asks test management for an acceptance test;
- creates/maintains technical documentation;
- hands over input to User Support to create/maintain user documentation/training material;
- requests administration to promote (after approval) the source(s) to production and communicates to the requester that the problem is solved.

[Figure: problem request 2]

AD HOC REPORTS

Sometimes users wish to have once-only information that is not provided by the existing reports. We distinguish two kinds of ad hoc reports:

1. The user needs real-time, detailed information.
2. The user needs detailed/aggregated information (the currency equals the frequency of the data warehouse refresh, or less).

Ad 1. In this case a ticket must be assigned to a (SQL) developer in the development team. If a report that meets the need of the requester already exists in the application of the source, User Support must refer to it. The (SQL) developer must be able to connect to the original source, because a data warehouse cannot provide real-time information. If User Support/development finds out that the query is similar to an existing request (e.g. with another filter), User Support can advise the user to have it published as a structural report. Then a change request has to be raised.
Generally, these kinds of ad hoc reports have a high-priority character because there is a real-time need.
The developer saves the query, the result of the query and the metadata, so User Support has access to it and can reuse it.

Ad 2. If the report already exists, User Support must refer to it. If not, User Support can analyse whether the report can be deployed on an existing OLAP cube. If so, User Support creates a ticket and assigns it to themselves. User Support must be aware that it should be a task of the user (as much as possible). If the report cannot be deployed on an existing OLAP cube, User Support assigns a ticket to a data access developer. The user must confirm that the request is a once-only request. Otherwise, a change request has to be raised.

[Figure: ad hoc report 1]

In case of assignment to development, a data access developer builds a query on the source system (in case of a real-time need) or on the DWH (no real-time need).

The developer:

- contacts the customer to make an inventory of the specifications;
- adds the new sources to the source id document;
- develops the query on development;
- asks the requester to evaluate the draft;
- runs the query on production;
- saves the result and the query in the content management system;
- maintains the definition repository.

[Figure: ad hoc report 2]

A migration-to-production process through change management won't be used, because of the high-priority character of an ad hoc report request. In fact, nothing will be published; the result will be sent to the requester (e.g. in XLS format). If a report on a cube is published, the cube model itself has not changed. Users themselves can create reports on cubes and publish them without using the acceptance environment.

CHANGE REQUESTS

Change requests cannot be raised by all users. This is only permitted to the owner of a (set of) reports (a key user), or to the lead of the User Group if the request concerns multiple key users.
A change request is a request for new information that is not provided by the existing reports/cubes. If a change request has a certain cost (e.g. above € 10,000) it will be classed and managed as a project. If less, it can be raised as an individual ticket assigned to development.

[Figure: change request 1]

In case of a problem report or a change request the developer:

- contacts the customer to make an inventory of the specifications;
- writes a functional design;
- makes a planning and gives feedback to the requester;
- delivers the functional design to the requester;
- chases the requester to approve the functional design (redesign can be the result);
- adapts the design in case of disapproval;
- writes the technical design in case of approval;
- has a colleague judge the technical design (redesign can be a result);
- checks out all necessary sources;
- adds the new sources to the source id document;
- develops on the client (with a connection to the development database/DWH);
- creates test specifications;
- checks in all sources;
- writes the deliver-to-administration document so administration can publish all sources;
- promotes the sources to the test landscape so testers can perform technical tests;
- writes a migration request and acceptance test criteria;
- requests administration to promote (after approval) the sources to acceptance and asks test management for an acceptance test;
- creates/maintains technical documentation;
- hands over input to User Support to create/maintain user documentation/training material;
- requests administration to promote (after approval) the source(s) to production and communicates to the requester that the request is implemented.

[Figure: change request 2]

DATA MANIPULATIONS

A Data Manipulation is a change of data in the Data Warehouse. Data Manipulation requests cannot be raised by all users. This is only permitted to the owner of a (set of) reports (a key user), or to the lead of the User Group if the request concerns multiple key users. In most cases definitions are dynamic; in some cases a definition is determined by hard-coded data. For instance, the rule that determines whether a cost object is billable or not cannot be automated: the cost objects must regularly be updated in a specific table. Another example is the definition of the bridging day: there is no formula to calculate this, and the policy can change every year. If possible, the requester must have the means to do it himself (e.g. ASP).
Changes must be maintained in the Data Warehouse Definition Handbook.

[Figure: Data Manipulation]

EMERGENCY REQUESTS

In case of an emergency it can be necessary to publish a change in the production environment as soon as possible. In this case the data warehouse team manager has to approve. These requests originate in the development team. It's an accelerated migration-to-production procedure. If necessary, technical documentation can be written afterwards. To check whether everything went well, a (short) evaluation with the DWH manager is needed.

[Figure: emergency request]

[Figure: emergency request 2]

Furthermore, an emergency request can arise if a user wants the result of an ad hoc report immediately. The team lead for data access has to approve.

[Figure: emergency request 3]

Required landscape in a DWH environment

This post is the second article in the series titled

'Requirements and procedures for the successful deployment and maintenance of a Data Warehouse and Business Intelligence environment'

The articles describe the requirements for developing, deploying and maintaining a Data Warehouse environment, including Business Intelligence solutions, within the following contexts:

Change/Problem Management (the process of managing changes and problems; in other words, how does a customer and/or a user submit a change/problem request, and how is it prioritized, tracked and implemented?)
Configuration Management (the versioning, labeling and tracking of code and content components during the development life cycle)
Release Management (designating and scheduling software and hardware release components to migrate them to the live environment as a set)



One of the most important aspects of building Business Intelligence solutions is the requirement to build and support multiple environments, which allow developers and testers to:

• Make changes to the (etl)-scripts/reports
• Add new subject areas
• Add or update dimensions
• Verify and test the accuracy of the data
• Experiment with joining/conforming subject areas

without affecting each other and/or the production reports/data.

At least four environments are required:

• A development environment, where data analysts, ETL specialists and report writers can add new or enhance existing components of the warehouse and test their changes. This can also be used as a system test environment; if needed, a separate system test (T) environment can be set up.
• An acceptance test environment which would be used to perform the final verification of data and reports prior to deploying into production.
• A production mirror environment. This environment mirrors production reports, web pages and data. It is used as a fallback.
• A production environment.

To maintain stability, development only has access to the development/test environment, and administration (and no one else) has access to acceptance and production. In case of an emergency request or an ad hoc report this can differ.

Depending on the change frequency, the acceptance environment will either be refreshed at the same frequency as the production environment or be refreshed every time a change is published.

For training purposes a separate environment (R) can be set up. Usually, acceptance cannot be used for this purpose (unless the performance is OK).

Below is a graphical presentation of an advisable environment (fallback machines are not shown; every server is intended to have fallback mechanisms).
Depending on the intensity of use, a smaller server configuration may be sufficient. A single-server configuration can only be advised for a development environment.

[Figure: landscape]

About tooling: to minimize administration problems a single-product configuration is advised. Nowadays Business Intelligence products cover the whole range of Data Warehouse requirements: Cognos has its own ETL tool, Informatica delivers high-end reporting products, Oracle and Microsoft are doing well in Business Intelligence, and SAP BI brings together a powerful business intelligence platform and data warehousing capabilities.

Required roles and responsibilities in a DWH environment

This post is the first article in the series titled

'Requirements and procedures for the successful deployment and maintenance of a Data Warehouse and Business Intelligence environment'

The articles describe the requirements for developing, deploying and maintaining a Data Warehouse environment, including Business Intelligence solutions, within the following contexts:

Change/Problem Management (the process of managing changes and problems; in other words, how does a customer and/or a user submit a change/problem request, and how is it prioritized, tracked and implemented?)
Configuration Management (the versioning, labeling and tracking of code and content components during the development life cycle)
Release Management (designating and scheduling software and hardware release components to migrate them to the live environment as a set)


The following users and roles are in general related to Business Intelligence activities (some roles may be combined: one person may fill more than one role, or a role may be filled by more than one person).

BUSINESS

Key User
Key users are the owners of a (subset of) reports. More importantly, they can be the owners of the definitions of specific performance indicators. They have privileges to report change requests and problem requests with respect to their own reports and cubes. They are not allowed to request changes of conformed dimensions and measures (dimensions and measures that are used by multiple key users).

User Group
Members of the User Group are key users who share conformed dimensions and measures. They authorize the head of the User Group to request changes. Usually the User Group meets to discuss changes (and the practical value of the management information produced by the Data Warehouse team). A representative of the Data Warehouse team can participate.

User
Users do not have the privilege to request changes. They may report problem requests.
Users can have different roles:

Recipient
Members can only retrieve content (for example by email)

Consumer
Members can read (and execute) (public) content such as PowerPlay/Impromptu reports.

Query User
Members can execute queries on the Data Warehouse (with TOAD, Cognos Impromptu, SQL*Plus)

Power User
Members have the same access permissions as Query Users. They can use Impromptu or PowerPlay to create and publish reports for public or personal use.

Control Advisor
A control advisor judges whether it's allowed to deliver information/data (e.g. privacy aspects). Data privacy must be kept in mind; very often there are legal restrictions around the correct use of (personal) data. The head of the User Group can perform this role and delegate it to, for example, HRM (privacy data) or Sales (customer data).

Business Analyst
The role of the business analyst is to perform research and possess knowledge of existing business applications and processes to assist in identification of potential data sources, business rules being applied to data as it is captured by and moved through the transaction processing applications, etc. Whenever possible, this role should be filled by someone who has extensive prior experience with a broad range of the organization’s business applications. Another alternative is using business analysts from the various application areas as sources when those areas are added to the data warehouse. These analysts may only be associated with the DW team for relatively short periods when the applications they are responsible for are addressed in the data warehouse project. In many cases, the business analyst will work with end users and management to develop business process models that may be used in designing the data warehouse. A business analyst can be a key user.
Business Advisor
The Business Advisor is a business analyst who transcends the separate processes. He owns shared, conformed dimensions and measures.

SOURCE SYSTEM

Functional Application Administration (FAA)

Members of Functional Application Administration support key users. They are responsible for the migration of changes in the source system (not the Data Warehouse environment). Functional administrators have functional knowledge of the source system and are capable of assisting the Data Warehouse team. They can map business questions to database tables/columns.

Technical Application Administration (TAA)

Technical administrators implement changes and solve problems. They can assist the Data Warehouse team in creating SQL scripts. They can map the more complex business questions.

Database administrators (DBA)
A database administrator maintains the database of the source system, with its schemas, tables and so on. It's important that the Data Warehouse team cooperates with the DBAs. Database maintenance is critical for the organization: a failure in database traffic can have huge consequences.

DATA WAREHOUSE TEAM

Data Warehouse Manager
The data warehouse manager has overall responsibility. The manager defines, plans, schedules, and controls. The DWH-plan must include tasks, deliverables and resources – the people who will perform the tasks. The manager will monitor and coordinate the activities of the team, and will review their deliverables. If contractors and consultants are used, the project manager assigns the tasks, monitors activities and deliverables and assures that knowledge transfer is indeed taking place.
The manager will estimate the cost of the project and monitor conformance to the cost estimates. The manager will also project the benefits, measure the effectiveness of the data warehouse and report on the benefits and costs.
The most important task for the manager is recruiting the right people; people with the requisite skills and people who work well with the users and each other. Getting the right people will require an understanding from management on the importance of the project and their cooperation in the recruiting process. The problem is that the good people are already taken and a smart manager outside of the data warehouse area is loath to give up an outstanding performer. Hiring people from the outside has its own risks and time delays. An outside hire will need some time to become familiar with the organization, to learn how things are done, to understand the minefields, and to learn about the source data.

User Support
User support is the first line of defense when the user has problems. These people must be customer/client-oriented and know that the calls from the users are not just annoyances but their reason for being. They must be knowledgeable about the tools, must understand the database structure, must know the data and must be familiar with frequently executed queries and reports. They must know the users’ concerns and problems and know the frequently asked questions and the answers to them. They must have a profile of the users, know the power users and the more casual users. They must be patient and responsive. If time elapses between hearing about and resolving the problem, the users must be informed on how the resolution is progressing. Just as in operational systems, an incident tracking system should be in place.
User support will create and maintain frequently asked questions (FAQs) that could be accessible through the Internet. User support should make extensive use of meta data to train, inform and answer user questions. A large percentage of questions that come to help desks involve problems of the users not understanding the meaning of the data, the timeliness of the data or the source of the data. Most of these questions can be answered with meta data.

Intake assistant
Raised requests are entered in a problem/change management system. The intake assistant judges the correctness of the reported type of request, assigns the issue to the relevant person and gives the requester feedback about the planning. If possible, the assistant creates a solution himself.
The duty includes the setup of user IDs and system access roles for each person or group, which is given access to the data warehouse or data mart.

Report builder
Sometimes User Support creates some of the canned queries and reports. It should be a task of the (power) user, but organizations can experience a passive attitude here in spite of knowledge transfer. This is not unusual; reporting is not the core activity of users.
User Support shouldn't run queries with a structural character. In practice, it can happen that a change request to publish the query as a standard report has not been finished yet. In the meantime User Support can run the query and distribute the result.

Job Monitor
Some of the problems that are seen by user support relate to the update/load (refresh) process, such as whether jobs are completed on time and whether they ran successfully. These processes must be monitored, and any variance in availability or timeliness of the data should be communicated to the users. Other problems relate to performance. User support must have a feel for performance, monitor performance and report performance degradation to database administration.

Trainer
The responsibilities can include training end users to use data query/analysis/ reporting tools, assisting end users in development of queries and reports, assisting in development of production queries/reports and assisting with evaluation and selection of end-user data access tools. This role is more likely to be placed within the IT data warehouse team. The trainer(s) can then work with end users in many departments which might be utilizing the query/reporting/ analysis tools selected for use by the company’s personnel.

Data Access Analyst
This role performs a variety of tasks, which help provide end users with the ability to access and utilize the data stored in the data warehouse or its dependent data marts. This includes evaluating and selecting business intelligence, OLAP and other query/reporting tools, identifying and prioritizing data access projects, analyzing data access requests to determine if and how the request can be met, educating end users on data warehouse capabilities and data availability, establishing data access standards, etc. This role can sometimes be filled by a power user, but most often will be filled by a member of User Support familiar with the application data who has been involved with the data warehouse design and definition process.

DATA ACQUISITION

Data Modeler
The person(s) in this role prepares data models for the source systems based on information provided by the business and/or data analysts. Additionally, the data modeler may assist with the development of the data model(s) for the DWH or a data mart guided by the data warehouse architect.

Data Warehouse Adviser
This role requires some level of familiarity with the source systems, the extract processes placed within the transaction processing systems, the content and design of the data warehouse, the load processes and business rules used for data integration and summarization, and the use of the data warehouse by end users or data mart build processes which may have been developed in conjunction with the data warehouse. This person or persons may also be required to have a fundamental knowledge of the data access tools and work with end users to design and develop data access functions for use within the departments or by the company as a whole. This role is fulfilled when a senior Business Intelligence/Data Warehousing consultant is needed for advice.

Data Warehouse Architect
These job responsibilities encompass definition of overall data warehouse architectures and standards, definition of data models for the data warehouse and all data marts, evaluation and selection of infrastructure components including hardware, DBMS, networking facilities, ETL (extract, transform and load) software, performing applications design and related tasks.

Data Warehouse Developer
The person(s) in this role develops various functional components of the data warehouse applications including extract programs on source systems, ETL applications, data cleansing functions, system management functions including load automation, data acquisition functions and others. The transformation, data cleansing and load functions will usually be developed by team members who are trained in the use of specific ETL and data cleansing tools.
The back-end ETL application developer is responsible for the acquisition process that constitutes the major effort for any data warehouse – the extract/transform/load process. It has been estimated that these tasks are 60 to 80 percent of the total time and effort to implement a data warehouse. Extract/transform/load will be performed either with an ETL tool, with conventional programming or with a combination of both. The back-end ETL application developers will be responsible for the process regardless of which approach is used. Do not assume a tool will eliminate the work or effort involved with this process. The required analysis is still time-consuming and describing the complicated transformation logic to the ETL tool can be difficult. The back-end ETL application developers are responsible for tie-outs, which are the controls of numbers of records extracted matched to the number of records loaded, controls on totals, on errors and on the types of errors. Data cleansing, whether it is done with a cleansing tool, with an ETL product or with conventional coding, is also the responsibility of the back-end ETL application developers. Neither the data warehouse nor the back-end ETL application developers are responsible for fixing the operational systems that feed the data warehouse.
The ETL process is always time- and resource-consuming, even with a data warehouse of moderate size. Application development must be aware of the performance implications of their architecture, design and coding.

DATA ACCESS

The second part of the DW project team is a group whose purpose is to get data, or rather, information out of the data warehouse or a data mart.

Data Access Developer
The person in this position develops data access queries/programs using BI tools such as Cognos Impromptu, PowerPlay and Cognos Query to run in production on a regular basis, and assists users with the development of complex ad hoc queries and analyses. This role may be a power user, trained in the use of the query/reporting/analysis tool, or a member of User Support. The front-end application developers play a critical role in delivering the data from the data warehouse and are usually heavily involved with the users. One of the selling points of the data warehouse is to empower users to write their own queries; without Data Access Developers, users have to write incredibly complex queries themselves - often to the detriment of performance. Given a clean and correct Data Warehouse with documented logical and technical metadata, a Power User may be able to write his own queries and publish reports. However, there are many users who are not capable of writing all the queries they need. In these cases, the front-end application developers will be responsible for writing the queries for them. Developers must be aware of the importance of writing functional and technical documents.

ADMINISTRATION AND MAINTENANCE

Data Warehouse Infrastructure Manager
If a project is large enough to require dedicated resources for system administration and database administrators (DBAs), you may want a person who provides leadership and direction for these efforts. This would be someone who is familiar with the hardware and software likely to be used, experienced in administration of these areas and who can direct tuning and optimization efforts as warehouse development and use moves forward in the organization. Including the infrastructure team within the larger data warehousing group helps ensure that the needed resources are available to keep the project on track and within budget.

System Administrator
This position is responsible for maintaining hardware reliability, system-level security, system-level performance monitoring and tuning, and automation of production activities including extract and load functions, repetitively produced queries/reports, etc. In many cases, the system administrator is responsible for ensuring that appropriate disaster recovery functions such as system-level backups are performed correctly and on an accepted schedule. This person is also responsible for all data acquisition and data access software updates and migrations. Some of the problems relate to the update/load (refresh) process, such as whether it completed on time and whether it ran successfully. These processes must be monitored, and any variance in availability or timeliness of the data should be communicated to the User Group.
The architecture consists of Development/Test, Acceptance and Production.

Database Administrator
The person in this role is involved in database design, especially the physical design used to implement the data warehouse or data marts. The DBA monitors and tunes DBMS performance, administers end-user access and security at the DBMS level, ensures that DW data is appropriately protected via database backups and related strategies, assists with load process automation, and evaluates and selects infrastructure components. The DBA should be proficient in the technology of both the DBMS and the operating system chosen for the data warehouse or data mart. Other problems relate to performance: the Database Administrator has a feel for performance, monitors it, spots poor query techniques that could cause bad performance, and helps the users write more efficient queries and reports.

Data Administrator
The primary roles of data administration are key to a successful data warehouse.
This role is responsible for identifying, documenting and administering the corporate data asset. This includes working with end users and IT staff to gain consensus on standard definition of common data attributes, developing and maintaining the corporate data model, identifying and documenting data sources and maintaining the meta data repository. The Data Administrator constantly monitors the accuracy and completeness of the metadata.
The data administrator may not be a reporting member of the data warehousing organization but is an integral part of the data identification and analysis needed to build a data warehouse or mart.
Data administrators model the business data according to the business rules and policies governing the data relationships. They also document and maintain the mapping between the logical data model and the physical database design and between the source data and the data warehouse target data. Data administrators understand the source files and know which are the appropriate source files for extraction. Data administration establishes and maintains the naming standards. They communicate the availability of data and, when appropriate, suggest that departments share their data. Data administration is responsible for administering the tool used in data modeling and administering the tool libraries although in some organizations, this is the responsibility of the application development team.
Data administration is responsible for administering the meta data repository. Meta data will come from a number of sources: the modeling tools, the extract/transform/load tool, the query tools, reengineering of the source file data definitions and direct entry into a meta data repository tool. The meta data repository could potentially be updated as well as accessed from each of these source systems. Tight controls are necessary to minimize corruption of the meta data. Data administration is responsible for controlling entries, updates and deletes.
A Data Administrator administers the libraries that store scripts, queries and reports. Administration includes developing and implementing the standards for admission to the library and for maintaining the scripts. Version control software must be used for the sources and documents, to prevent the existence of different sources with the same name.

Test Manager
Changes like new queries/reports, DWH-build procedures and ETL-implementations must be well documented and must be thoroughly tested. A functional acceptance-test will be done by the Business Analyst (coordinated by the Test Manager). Team members with (PL)/SQL-knowledge do the technical tests.
Data quality is everyone's job, but it is the primary focus of the Test Manager. Although information about data quality problems often originates in IT, it most often comes from the users. Reconciliation must be a regular part of the job. The responsibilities include monitoring and reporting on data quality, and searching out causes of incompatibility between the various applications which collect and utilize the company's data. The Test Manager is responsible for finding and reporting problems relating to data quality, for tracking these problems and for assuring that the resolution is assigned to a responsible party. Some of the discovered problems must be reported to data administration, where the data exceptions can be properly incorporated into the logical data model. The Test Manager can be involved in writing the programming specifications for the transformation logic that needs to occur during the ETL process.
Not all problems can be resolved nor should they be. It might cost more to fix the problem than it’s worth. The Test Manager along with other interested parties, including the business unit, should determine the criticality of the problem. Analytical requirements could determine the quality requirements. The cost to fix the problem should be estimated and the priority would then be assigned.
The Test Manager should be proactive in trying to find problems before they surface. Data quality analysis tools or simple queries can identify many of the problems.
The most important role for the Test Manager will be evaluating and improving the processes in the data warehouse, specifically the ETL process. By improving the processes, the quality of data is likely to improve. The Test Manager might also seek out cleaner sources of data to replace those that have proven troublesome.

Publish Administrator
The publish administrator publishes all developed sources to the acceptance and production platforms. This concerns all DWH-related sources, such as DWH build scripts, ETL scripts and reporting sources. The Publish Administrator has the responsibility to establish an environment where result sets can be distributed (on the Web), where reports can be made available (on the Web) and where queries can be launched (from the Web).
This is not done by the developers themselves, because changes from development to acceptance and production must be judged on test results and functional/technical documentation. This prevents production misbehavior and knowledge gaps. If a source must be rebuilt, the documentation has to be sufficient to do so. Besides that, the data administrator must be assured that definitions have not changed and that no new definitions are created without approval of the business. The Data Warehouse Infrastructure Manager monitors the chosen solution: is everything compatible with the Data Warehouse principles (for example, minor transformations in the front end)?

Tool Administrator
The tool administrator has responsibility for dealing with the tool vendor, assuring excellent support and getting answers to questions. The tool administrator deals with problems associated with the tool including queries that produce incorrect results and queries that perform badly. The tool administrator makes sure the tool environment is properly protected. He has responsibility for new releases and migrations.
In most cases this responsibility is allocated to the involved developers/administrators (Query and OLAP tools assigned to the Data Access Developer; ETL tools to the Data Warehouse Developer and so on).

Administration versus Development
If the data warehouse team is equipped with substantial functional and technical documents and with configuration management and release management procedures, it's possible to hand over development knowledge to an administration team. In that case it's possible that administration handles problem requests and development handles change requests. Then data acquisition and data access roles must also be introduced in Administration. If not, production development, including problem requests, should stay with the original developers.

CHANGE MANAGEMENT
Representatives in a 'change meeting' judge the prerequisites for a migration to acceptance and production. Administration, Development and the head of the User Group take part. Changes are judged on the presence and completeness of the required documentation and the correctness of the followed procedures.
Below you find a graphical presentation of the different roles. Again, some roles may be combined: one person may fill more than one role, or more than one person may fill a role.

[Figure: landscape2]

Friday 6 March 2009

SQL Server 2008: Finally the ISO8601 week function

The US system has weeks from Sunday through Saturday, with partial weeks at the beginning and the end of the year. An advantage is that no separate year numbering like the ISO year is needed, while the correspondence of lexicographical order and chronological order is preserved.

However, in Europe we use ISO 8601, where the week starts on Monday. This system is used (mainly) in government and business for fiscal periods.

There are mutually equivalent descriptions of week 01:

- the week with the year's first Thursday in it (the formal ISO definition),
- the week with 4 January in it,
- the first week with the majority (four or more) of its days in the starting year, and
- the week starting with the Monday in the period 29 December - 4 January.

If 1 January is on a Monday, Tuesday, Wednesday or Thursday, it is in week 01. If 1 January is on a Friday, Saturday or Sunday, it is in week 52 or 53 of the previous year. 28 December is always in the last week of its year. The week number can be described by counting the Thursdays: week 12 contains the 12th Thursday of the year.
The ISO week-numbering year starts at the first day (Monday) of week 01 and ends at the Sunday before the new ISO year (hence without overlap or gap). It consists of 52 or 53 full weeks. The ISO week-numbering year number deviates from the number of the calendar year (Gregorian year) on a Friday, Saturday and Sunday, or a Saturday and Sunday, or just a Sunday, at the start of the calendar year (which are at the end of the previous ISO week-numbering year), and on a Monday, Tuesday and Wednesday, or a Monday and Tuesday, or just a Monday, at the end of the calendar year (which are in week 01 of the next ISO week-numbering year).

For Thursdays, the ISO week-numbering year number is always equal to the calendar year number.

In SQL Server, SELECT DATEPART(wk, '2010-01-01') results in week number 1, while it should be week 53.

Until version 2008, ISO 8601 weeks were not supported.

In MS SQL Server 2008 you can use

SELECT DATEPART(isowk, '2010-01-01'), which results in week 53.
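A few illustrative checks of the boundary rules described above (assuming SQL Server 2008 or later):

SELECT DATEPART(isowk, '2008-12-29') -- 1: Monday 29 December 2008 opens ISO week 01 of 2009
SELECT DATEPART(isowk, '2010-01-03') -- 53: Sunday 3 January 2010 still belongs to ISO week 53 of 2009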


In older versions (SQL 2005, SQL 2000) you have to create a function to use ISO weeks:

CREATE FUNCTION [dbo].[ISOweek] (@DATE datetime)
RETURNS int
AS
BEGIN
DECLARE @ISOweek int
SET @ISOweek= DATEPART(wk,@DATE)+1
-DATEPART(wk,CAST(DATEPART(yy,@DATE) as CHAR(4))+'0104')
--Special cases: Jan 1-3 may belong to the previous year
IF (@ISOweek=0)
SET @ISOweek=dbo.ISOweek(CAST(DATEPART(yy,@DATE)-1
AS CHAR(4))+'12'+ CAST(24+DATEPART(DAY,@DATE) AS CHAR(2)))+1
--Special case: Dec 29-31 may belong to the next year
IF ((DATEPART(mm,@DATE)=12) AND
((DATEPART(dd,@DATE)-DATEPART(dw,@DATE))>= 28))
SET @ISOweek=1
RETURN(@ISOweek)
END


This is an example from SQL Server Books Online by Microsoft.

However, the function relies on the first day of the week being a Monday.

Before testing, you should set the first day of the week to Monday by using:

SET DATEFIRST 1

After that you can check it by:

select
Date,
dbo.ISOWEEK(a.Date) ISOWEEK
from
(
select Date = getdate() union
select Date = convert(datetime,'2010/01/01')
) a


The function below works independently of the DATEFIRST setting:

create function dbo.ISOWEEK2
(
@Date datetime
)
returns int
as

begin

declare @WeekOfYear int

select
-- Compute week of year as (days since start of year/7)+1
-- Division by 7 gives whole weeks since start of year.
-- Adding 1 starts week number at 1, instead of zero.
@WeekOfYear =
(datediff(dd,
-- Case finds start of year
case
when NextYrStart <= @date
then NextYrStart
when CurrYrStart <= @date
then CurrYrStart
else PriorYrStart
end,@date)/7)+1
from
(
select
-- Note: -53690 corresponds to 1753-01-01, which is a Monday; using it as a
-- fixed anchor makes the week boundaries independent of the DATEFIRST setting
-- First day of first week of prior year
PriorYrStart =
dateadd(dd,(datediff(dd,-53690,dateadd(yy,-1,aa.Jan4))/7)*7,-53690),
-- First day of first week of current year
CurrYrStart =
dateadd(dd,(datediff(dd,-53690,aa.Jan4)/7)*7,-53690),
-- First day of first week of next year
NextYrStart =
dateadd(dd,(datediff(dd,-53690,dateadd(yy,1,aa.Jan4))/7)*7,-53690)
from
(
select
--Find Jan 4 for the year of the input date
Jan4 =
dateadd(dd,3,dateadd(yy,datediff(yy,0,@date),0))
) aa
) a

return @WeekOfYear

end
go


You can check it by:

SET DATEFIRST 7

select
Date,
dbo.ISOWEEK(a.Date) ISOWEEK,
dbo.ISOWEEK2(a.Date) ISOWEEK2
from
(
select Date = getdate() union
select Date = convert(datetime,'2010/01/01')
) a


results for 2010/01/01 in:

ISOWEEK = 52
ISOWEEK2 = 53

SET DATEFIRST 1

select
Date,
dbo.ISOWEEK(a.Date) ISOWEEK,
dbo.ISOWEEK2(a.Date) ISOWEEK2
from
(
select Date = getdate() union
select Date = convert(datetime,'2010/01/01')
) a


results for 2010/01/01 in:

ISOWEEK = 53
ISOWEEK2 = 53

Thursday 5 March 2009

Migration of Cognos7.4 Transformer to Cognos8.4

Today I needed to migrate Cognos Transformer 7.4 models to Cognos 8.4.
The IQDs had been created with the externalize method in Framework Manager.
The Series 7 IQD bridge is not installed.

My first test was to run a model by using the Open With... function (browsing to the cogtr.exe application). That failed with an error message referring to Cognos.INI.

That's a confusing error message, because Cognos.INI is not used anymore in this version (it's CS7G.ini, where I entered the correct database connection strings).
Of course, the batch file kicked off in SSIS containing "c:\program files\cognos\c8\bin\cogtr.exe" -n -s -kLOGON=%DBCRED%"d:\cube\cube model\model.mdl" also failed.

After that, I opened the cogtr.exe application itself. From Transformer I browsed to a model and opened it. Guess what? No problems were shown. This means the following change in the SSIS Execute Process Task should have the same effect:

Executable: c:\program files\cognos\c8\bin\cogtr.exe
Arguments: -n -s -kLOGON=%DBCRED%"d:\cube\cube model\model.mdl"
WorkingDirectory: c:\program files\cognos\c8\bin\

This change was successful indeed.
Honestly, I have no clue about the root cause of this misbehaviour.
Anyway, the 'workaround' works.

Wednesday 4 March 2009

DWH Maintenance team: Be aware of your Points of Failure

I have met a lot of situations where the end user notifies the Data Warehouse administration team that certain processes did not run, while it should be the maintenance team itself that notices the error (and communicates to the business about the problem, including an estimated time to fix).

To understand which connections, flows, objects or systems can bring the entire data flow to a halt, you should first determine every single point of failure within the whole architecture (network availability, database availability, certain constraints in extraction scripts, file transfer scripts, etc.). Although you should minimize the number of points of failure, the number of layers in a Data Warehouse environment results in several of them. As long as these are known and monitored, and 'how-to-repair' documentation is available and used, the chance of a total halt for an unacceptable time is minimized.

As you can define an almost unlimited number of points of failure, you should focus on the system level (like free disk space), the application level (like the availability of mirror jobs) and the data flow level (like file transfer processes and error-handling tables with non-loaded data).
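As a minimal sketch of monitoring on the data flow level, assuming an error-handling table dbo.etl_errors (a made-up name) into which rejected rows are written during the load:

-- raise a flag when today's load produced rejected rows (illustrative)
SELECT COUNT(*) AS rejected_rows
FROM dbo.etl_errors
WHERE error_date >= DATEADD(day, DATEDIFF(day, 0, GETDATE()), 0)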

Don't underestimate the number of points of failure. The graphical overview below presents a Data Warehouse environment.
Possible single points of failure to be analyzed are marked with a sign. The system itself that includes these points of failure can also be defined as a point of failure.

After detecting these points of failure, you should determine whether and how they need to be monitored.
Be aware that you, as an administrator, should only migrate the ETL, copy and front-end scripts if the developer hands over fail-over documentation, so you know what to do in case of failures.

Adding a trend line to your report

Trend lines are used in business analytics to show changes in data over time. Trend lines are often used to argue that a particular action or event (such as the first major press releases about the current financial crisis) caused observed changes at a point in time.

This is a simple technique that does not require sophisticated analysis. However, most reporting tools do not have a feature to add trend lines to charts (like MS Excel has).

You could add the linear regression formula to your reporting tool as a calculated field, but you shouldn't: the formula would have to be calculated every time the report is executed, and it is a performance killer. Instead, compute the y-values of the trend line in your ETL process and store them in your Data Warehouse.

The formula used is y' = mx + b, where m is the slope of the line and b is the y-intercept, which is the y-coordinate of the point where the line crosses the y-axis. This can be seen by letting x = 0, which immediately gives y = b. Written out with the least-squares estimates (the sums run over all points, with averages x̄ and ȳ):

m = (Σxy - Σx·ȳ) / (Σx² - Σx·x̄)
b = ȳ - m·x̄

To clarify this formula, take a look at the following example:

x    y
1    20883
2    24260
3    21678
4    27094

gives:
x = {1,2,3,4},
y = {20883,24260,21678,27094},
average y = (20883+24260+21678+27094)/4 = 23478.75
average x = (1+2+3+4)/4 = 2.5
total xy = (20883*1)+(24260*2)+(21678*3)+(27094*4) = 242813
total x = 1+2+3+4 = 10
total x*x = (1*1)+(2*2)+(3*3)+(4*4) = 30
This results in a slope of 1605.1 and a y-intercept of 19466, so the y'-coordinates will be:

x    y'
1    21071.1
2    22676.2
3    24281.3
4    25886.4
If the above example is the revenue for a certain salesperson and you want to visualize a list of salespersons starting with the one with the most declining trend in revenue, you have to use the slope to set the order property. In equation form, the slope between two points on the line is:

m = (y2 - y1) / (x2 - x1)

The slope is used to describe the steepness, incline, gradient, or grade of the trend line. A higher slope value indicates a steeper incline. The slope is defined as the ratio of the "rise" divided by the "run" between two points on the line, or in other words, the ratio of the altitude change to the horizontal distance between any two points on the line.


Here is the SQL script (based on SQL Server) to load the trend data:

CREATE TABLE
[dbo].[BM_Trend]
(
[Salesperson] [nvarchar](25) NULL,
[Week] [tinyint] NULL,
[y] [numeric](18, 0) NULL,
[x] [numeric](6, 0) NULL,
[x2] [int] NULL,
[xy] [int] NULL,
[totalx] [int] NULL,
[totaly] [numeric](18, 0) NULL,
[totalx2] [bigint] NULL,
[totalxy] [numeric](18, 0) NULL,
[avgx] [float] NULL,
[avgy] [float] NULL,
[maxx] [int] NULL,
[slope] [float] NULL,
[intercept] [float] NULL,
[ytrend] [float] NULL)

insert into bm_trend
(
salesperson
, week
, y
, x
, x2
, xy
)
select
fs.salesperson
, fs.week
, sum(fs.value) y
, ROW_NUMBER() OVER(PARTITION BY fs.salesperson
ORDER BY fs.week) x
, ROW_NUMBER() OVER(PARTITION BY fs.salesperson
ORDER BY fs.week) *
ROW_NUMBER() OVER(PARTITION BY fs.salesperson
ORDER BY fs.week) x2
, ROW_NUMBER() OVER(PARTITION BY fs.salesperson
ORDER BY fs.week) *
sum(fs.value) xy
from BM_SalesFacts FS
group by fs.salesperson, fs.week

update bm_trend
set totalx = a.totalx
, totaly = a.totaly
, totalx2 = a.totalx2
, totalxy = a.totalxy
, avgx = a.avgx
, avgy = a.avgy
, maxx = a.maxx
from
(select
salesperson
, sum(x) totalx
, sum(y) totaly
, sum(x2) totalx2
, sum(xy) totalxy
, avg(x) avgx
, avg(y) avgy
, max(x) maxx
from
bm_trend
group by salesperson)a
where a.salesperson = bm_trend.salesperson

update bm_trend
set slope = a.slope
, intercept = a.intercept
, ytrend = a.ytrend
from
(select
salesperson
, x
, case when totalx2-totalx*avgx = 0 then 0 else (totalxy-totalx*avgy)/(totalx2-totalx*avgx) end slope
, avgy-(case when totalx2-totalx*avgx = 0 then 0 else (totalxy-totalx*avgy)/(totalx2-totalx*avgx) end) *avgx intercept
, (case when totalx2-totalx*avgx = 0 then 0 else (totalxy-totalx*avgy)/(totalx2-totalx*avgx) end)*x+
avgy-(case when totalx2-totalx*avgx = 0 then 0 else (totalxy-totalx*avgy)/(totalx2-totalx*avgx) end) *avgx Ytrend
from
bm_trend
)a
where a.salesperson = bm_trend.salesperson
and a.x = bm_trend.x


Now you can use the column Ytrend for the trend line in the chart, and the column slope to order the salespersons in a list.
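For example, the report query could simply be the following (illustrative; ordering by slope puts the steepest decline first):

SELECT salesperson, week, y, ytrend, slope
FROM bm_trend
ORDER BY slope, salesperson, week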