Contents
LCD Reconstruction
Status
Recent Updates
Future(?)
Data Repository
Current Status
Problems/Potential Solutions

LCD Reconstruction Status
Tony Johnson
SLAC – 11-July-2000

Reconstruction Road Map

Reconstruction
Track Reconstruction
Track Finding uses M.Ronan’s (TPC) pattern finding
Tuned for Large + Small detector
Track Fitters:
SLD Weight Matrix Fitter
Can do Single Detector or Combined fit (e.g. VTX+TPC)
Hit Smearing/Efficiency (since Gismo gives “perfect” hits)
Random Background overlay
What’s still needed:
More Track Finding Algorithms (Cheater, Projective Geometry)
End Cap tracking, Hit Merging
Kalman Filter (using FNAL MCFast fitter) – partially done?
Fortran, not Java; will need a native library for each platform
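Because Gismo delivers "perfect" hits, smearing and efficiency are applied before track finding. A minimal sketch of that step, with a hypothetical class name and hit representation (not the actual package API): hits are (x, y, z) arrays, positions get Gaussian smearing, and each hit survives with a fixed probability.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

/** Illustrative hit smearing/efficiency filter; names are hypothetical. */
public class HitSmearer {
    private final Random rng;
    private final double sigma;      // Gaussian position resolution
    private final double efficiency; // probability a hit is kept

    public HitSmearer(long seed, double sigma, double efficiency) {
        this.rng = new Random(seed);
        this.sigma = sigma;
        this.efficiency = efficiency;
    }

    /** Drop hits at random (inefficiency) and smear the survivors' positions. */
    public List<double[]> process(List<double[]> perfectHits) {
        List<double[]> out = new ArrayList<>();
        for (double[] hit : perfectHits) {
            if (rng.nextDouble() > efficiency) continue; // simulated inefficiency
            double[] smeared = new double[hit.length];
            for (int i = 0; i < hit.length; i++) {
                smeared[i] = hit[i] + sigma * rng.nextGaussian();
            }
            out.add(smeared);
        }
        return out;
    }
}
```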

Reconstruction cont.
Cluster Finding
Three Clustering Algorithms Currently Implemented
Cluster Cheater (uses MC truth to “cheat”)
Simple Cluster Builder (Touching Cells)
Radial Cluster Builder
All algorithms tend to produce many very low energy clusters, so it is important to set sensible thresholds
Still Needed - Cluster Refinement Stage
Combine HAD + EM clusters
Endcap + Barrel overlap region
In Progress - Track Cluster Association
Initial Implementation Done by Mike Ronan
Output Format defined by Gary Bower
Need to Extend Definition of Clusters
Directionality, Entry point to calorimeter
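The "touching cells" idea behind the Simple Cluster Builder can be sketched as follows: grow a cluster by walking from any unused cell to its edge-sharing neighbours, then apply an energy threshold to suppress the many very low energy clusters noted above. The class name and 2-D grid model are hypothetical, not the actual recon interfaces.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

/** Illustrative "touching cells" cluster builder on a 2-D cell grid. */
public class SimpleClusterBuilder {
    /** Cells keyed by packed (ix,iy); value is the cell energy.
     *  Returns the energies of clusters passing the threshold. */
    public static List<Double> clusterEnergies(Map<Long, Double> cells, double minClusterE) {
        Set<Long> unused = new HashSet<>(cells.keySet());
        List<Double> clusters = new ArrayList<>();
        while (!unused.isEmpty()) {
            long seed = unused.iterator().next();
            double energy = 0.0;
            Deque<Long> stack = new ArrayDeque<>();
            stack.push(seed);
            unused.remove(seed);
            while (!stack.isEmpty()) {
                long key = stack.pop();
                energy += cells.get(key);
                int ix = (int) (key >> 32), iy = (int) key;
                int[][] nbrs = {{ix + 1, iy}, {ix - 1, iy}, {ix, iy + 1}, {ix, iy - 1}};
                for (int[] n : nbrs) {
                    long nk = key(n[0], n[1]);
                    if (unused.remove(nk)) stack.push(nk); // touching cell joins cluster
                }
            }
            if (energy >= minClusterE) clusters.add(energy); // threshold kills tiny clusters
        }
        return clusters;
    }

    public static long key(int ix, int iy) {
        return ((long) ix << 32) | (iy & 0xffffffffL);
    }
}
```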

Physics Utilities
Physics Utilities
4-vector, 3-vector classes
Event shape/Thrust finder
Jet Finder
Jade and Durham algorithms implemented
Extensible to allow implementation of other algorithms
Contrib. Area
Analysis Utilities and sample analyses provided by users
2 Event Displays
2D - Suitable for debugging reconstruction and analysis
Wired for full 3D support
Particle Hierarchy Display
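For reference, the standard Jade and Durham inter-particle distance measures implemented by the jet finder can be sketched as standalone code (illustrative only, not the package's actual API); four-vectors are (E, px, py, pz) and evis is the total visible energy.

```java
/** Jade and Durham distance measures used by e+e- jet clustering. */
public class JetMeasures {
    static double cosTheta(double[] a, double[] b) {
        double dot = a[1] * b[1] + a[2] * b[2] + a[3] * b[3];
        double pa = Math.sqrt(a[1] * a[1] + a[2] * a[2] + a[3] * a[3]);
        double pb = Math.sqrt(b[1] * b[1] + b[2] * b[2] + b[3] * b[3]);
        return dot / (pa * pb);
    }

    /** Jade: y_ij = 2 E_i E_j (1 - cos theta_ij) / E_vis^2 */
    public static double yJade(double[] a, double[] b, double evis) {
        return 2.0 * a[0] * b[0] * (1.0 - cosTheta(a, b)) / (evis * evis);
    }

    /** Durham (kT): y_ij = 2 min(E_i, E_j)^2 (1 - cos theta_ij) / E_vis^2 */
    public static double yDurham(double[] a, double[] b, double evis) {
        double emin = Math.min(a[0], b[0]);
        return 2.0 * emin * emin * (1.0 - cosTheta(a, b)) / (evis * evis);
    }
}
```

A new algorithm plugs in by supplying its own y_ij measure, which is the sense in which the finder is extensible.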

2D Event Display

Event Display

Wired (M. Donszelmann – CERN)

Code Availability
Reconstruction
Code recently moved to CVS for universal access
Browse CVS repository on Web
Connect with your favorite CVS client
Platform independent make (jmk) now used.
Most development currently done on NT
Now Unix development should be easy too

Recent Work (ongoing)
Switch to SIO format (replaces ASCII file)
SIO reading working
ToDo:
SIO recon output format needs to be defined
SIO writing needs to be completed
Converter utilities need to be upgraded
Use XML geometry description
Not yet done (workaround available)
Support for S2, L2 (+ old) detectors
Retune recon for new geometries
Need cell merging utility
Need to decide on standard parameters for batch running
Background Overlays
Diagnostic Histograms

Beam Background Overlays
(Gary Bower)
Take output from the Guinea Pig beam simulation
Feed events into full Gismo simulation
Build library of simulated background bunches
Overlay backgrounds on signal events at start of reconstruction
Adjust timing of hits (e.g. for the TPC)
Combine (add) energy in calorimeter cells
Allows changing the number of bunches per train and the bunch timing
ToDo
Ability to overlay events
Time shifts in TPC, Merge hits in calorimeter
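The two overlay operations above, adding background energy into signal calorimeter cells and time-shifting TPC hits by their bunch offset, can be sketched as follows (hypothetical names and data model, not the actual implementation):

```java
import java.util.HashMap;
import java.util.Map;

/** Illustrative background overlay step; names are hypothetical. */
public class BackgroundOverlay {
    /** Merge background cell energies into the signal event, summing per cell id. */
    public static Map<Long, Double> overlayCalorimeter(Map<Long, Double> signal,
                                                       Map<Long, Double> background) {
        Map<Long, Double> merged = new HashMap<>(signal);
        for (Map.Entry<Long, Double> e : background.entrySet()) {
            merged.merge(e.getKey(), e.getValue(), Double::sum); // add energies in shared cells
        }
        return merged;
    }

    /** Shift a TPC hit time by the offset of the bunch it came from. */
    public static double shiftHitTime(double t, int bunchIndex, double bunchSpacing) {
        return t + bunchIndex * bunchSpacing;
    }
}
```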

Background Overlays
L1 Detector
4 Tesla field
1cm beampipe ±5cm from IP
Primary particles from single bunch crossing
90k particles/bunch
95 bunches/train

Quality Control (Ron Cassell)
Standard set of diagnostic plots for checking generator/simulation/reconstruction output.

Track Reconstruction Efficiency
(Wolfgang Walkowiak)
Efficiencies obtained (all cuts):
Pythia+bms udscb samples
Low efficiency at low momentum.
Forward disks missing in reconstruction.
Problems with e.g. K0s and Λ decay vertices.

Recon To Do List
Finish integration of MCFast Kalman filter
Tracker hit merging
Support for merging signal/backgrounds
Additional Track Finders (projective, “cheater”)
Improved Cluster Description
Track/Cluster Association
Cluster Refinement
Vertex Finding
Define recon output structures
Support for SIO format writing/SIO data converters
Switch to XML based geometry description
Tune recon for S2/L2
Define “standard” recon for batch running
Small Angle Tracking

Some Observations
In last year
Some (small) progress on infrastructure
SIO, background overlays, WIRED event display, diagnostic histograms
No progress on core reconstruction program
Original authors have all become involved in other things.
Many things remain to be done
Very little usage of full simulation/recon package
Mike Ronan, Wolfgang Walkowiak only users
Progress requires detailed analysis

More Observations
For FastMC we have root + jas versions
FastMC is comparatively simple
Since it runs very fast, it makes sense to tie it closely to an analysis tool
Current reconstruction (and simulation) is not tied to any particular analysis system
Can be used with Root or with JAS
SIO file can be converted to root or .lcd (JAS)
Tying the recon package to a particular analysis system makes less sense
Plenty to be done on existing recon package
Please don’t create another one

We Have to Choose:
Abandon full recon and concentrate on Fast MC simulation
Is full MC without recon very useful?
Find more manpower to work on recon and analysis (not to mention simulation)
Either find more manpower at SLAC
Or try to get collaborators more closely involved
To choose we need to define clearer goals for recon/simulation effort
What are we trying to achieve?
On what timescale?
With what manpower?

LCD Data Repository
Tony Johnson
SLAC – 11 July 2000

Current Status
Data Repository for LCD data is at Penn
Thought to be a good idea (politically) to not have it at SLAC
Penn has a large AIX system set up as part of their “National Scalable Cluster Project”
Bob Hollebeek and Don Benton

What Currently Exists?
Data Repository (sp05.hep.upenn.edu)
(semi) Automatic FTP transfer program
ASCII (SIO) data is transferred to Penn
Small files concatenated together
Automatically converted to Root and .lcd (JAS) format at Penn
Approximately 50GB(???) of data
All data is accessible via FTP
JAS server runs at Penn to support client-server JAS mode.

Problems
The system is not as reliable as one could hope
System is rarely used, so when it is used it often isn’t working
Datasets not always successfully converted
Too little communication
Result:
Most people use SLDNT0 instead
Never really intended to be a production system
Doesn’t have enough disk space
Hard to diagnose memory problems on NT machine

What Should be Done?
If we want distributed collaboration to work we need regular organizational meetings.
Minimum
Monthly video/phone meetings
2-4 face-to-face meetings per year
We should set up Data Repository at SLAC
Need dedicated Unix machine
With disk space (and tape backup?)
Need automated tools to check status