Facilities
Rooms and Facilities
Department Rooms
| Room | Name / Purpose |
| 1106 Etcheverry | Conference Room |
| 1110B Etcheverry | Radiation Detection Teaching Lab |
| 4101 Etcheverry | Conference Room & Library |
| 4116 Etcheverry | Graduate Student Offices |
| 4126 Etcheverry | Graduate Student Instructor (GSI) Office |
| 4147 Etcheverry | NE Student Lounge |
| 4149 Etcheverry | Student Services |
| 4151 Etcheverry | Visiting Scholars Office |
| 4155 Etcheverry | Department Main Office |
| 4167 Etcheverry | GoNERI-UCBNE Collaboration |
Research Laboratories and Centers
| Name | Room | Professor |
| | 1110A Etcheverry | Norman |
| Berkeley Applied Research for the Imaging of Neutrons and Gamma-rays (BEARING) | 1110 Etcheverry | Vetter |
| Berkeley Nuclear Research Center (BNRC) | 3115 Etcheverry | Vujic |
| Domestic Nuclear Threat Security (DoNuTS) | 1140 Etcheverry, 3115A Etcheverry | Morse, Hochbaum, Norman, Siegrist |
| Nuclear Materials Lab | 1107 Etcheverry, 3107 Etcheverry | Hosemann |
| Nuclear Waste Computational Lab | 4126B Etcheverry | Ahn |
| Reactor Design Group | 3115B Etcheverry | Greenspan |
| Renewable and Appropriate Energy Laboratory (RAEL) | Richmond Field Station, Bldg. 113 | Kammen |
| Thermal Hydraulics Lab | 4118 Etcheverry | Peterson |
Nuclear Waste Computational Lab
Professor Ahn’s research group utilizes LLNL supercomputer facilities, as well as Sun workstations equipped with RELAP-5. The group also maintains extensive software for multi-dimensional compressible flow calculations. Macintosh computers with video acquisition cards are used for image processing, animation of numerical computation results, and graphics output. Six Pentium II/III-based PCs, a Sparc 20 workstation, two Macintosh computers, a laser printer, and a scanner form the intranet of the Nuclear Waste Research Lab for analyses and computer-code development, including (1) groundwater flow models in heterogeneous fracture networks in geologic formations, (2) radionuclide transport models through the engineered barriers of a geologic repository and through the host geologic formations, (3) integrated models for repository performance assessment using an object-oriented approach and parallel computing with PVM, and (4) mass flow models of radioactive materials in a nuclear fuel cycle.
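To make the flavor of item (2) concrete, here is a minimal sketch of a 1-D advection-dispersion calculation for a decaying radionuclide, using the classical van Genuchten analytical solution. It is illustrative only: the helper name `ade_decay` and all parameter values are assumptions for this sketch, not the group's actual codes or repository data.

```python
# Minimal sketch: 1-D advection-dispersion transport of a decaying
# radionuclide along a single flow path, via the classical van Genuchten
# analytical solution for a constant-concentration inlet.
import math

def ade_decay(x, t, v, D, lam, c0=1.0):
    """Concentration at distance x (m) and time t (yr), for groundwater
    velocity v (m/yr), dispersion coefficient D (m^2/yr), and decay
    constant lam (1/yr)."""
    u = math.sqrt(v * v + 4.0 * lam * D)   # decay-modified velocity
    s = 2.0 * math.sqrt(D * t)
    a = 0.5 * math.exp((v - u) * x / (2.0 * D)) * math.erfc((x - u * t) / s)
    b = 0.5 * math.exp((v + u) * x / (2.0 * D)) * math.erfc((x + u * t) / s)
    return c0 * (a + b)

# Example: Np-237 (half-life ~2.14e6 yr) migrating 500 m (assumed values)
lam = math.log(2.0) / 2.14e6            # decay constant, 1/yr
for t in (1e3, 1e4, 1e5):
    c = ade_decay(500.0, t, v=1.0, D=50.0, lam=lam)
    print(f"t = {t:8.0f} yr   C/C0 = {c:.3e}")
```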
Nuclear Materials
Prof. Hosemann has acquired metallography equipment donated by Los Alamos National Laboratory (polishers, a high-speed cutting saw, etc.) as well as two high-temperature multi-zone tube furnaces. Advanced sample preparation equipment, including a Buehler Vibromet, was purchased with his new research funding. For materials characterization, a 1000x optical microscope, a microhardness tester, and a high-temperature, high- and low-load nano-indenter were purchased, along with an environmental chamber from Micro Materials. In collaboration with MSE, a low-temperature (4 K) Instron tensile test machine is installed in 1140 Etcheverry. A Quanta dual-beam focused ion beam instrument, with a total cost of $1.2M, was purchased in collaboration with the Materials Science and Bioengineering departments; it will be located in the laboratory facility in Stanley Hall.
Equipment
Pelletron Accelerator
We have received a tandem pelletron accelerator with associated hardware from the Department of Homeland Security. The accelerator will be used to perform experiments in nuclear resonance fluorescence (NRF) and positron annihilation spectroscopy in association with the research activities outlined above, and in particular to obtain NRF cross-section data in SNM nuclei of interest. We are particularly interested in completing the database for 235U and 239Pu, and the 3.5 MeV endpoint of this machine will be useful toward this end. The pelletron's electron source has been run at full extraction voltage, and the beam can be brought to a focus with a millimeter spot size. Another proposed use of the machine is a scheme to generate Doppler-compensated NRF photons using direct excitation of the target nucleus to be detected, such as 235U. Design calculations have been undertaken to explore the feasibility of electron excitation, in an accelerator target, of the same nuclear level as the one sought in the cargo nucleus of interest. This technique requires that the target nucleus be moving in order to balance the Mössbauer-type recoil effects for reabsorption by a nucleus of the same species.
A rotating-target scheme has been examined to explore this possibility, and examples of rotating-target NRF in non-SNM applications have been found in the literature. We plan to continue these studies and to use the Pelletron for inelastic excitation measurements in 235U.
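As a rough illustration of why the target must move, the following sketch computes the free-nucleus recoil energy and the source speed needed to Doppler-compensate it. The 1.73 MeV photon energy is only a representative value for a 235U NRF line, not a design figure, and the helper `recoil_ev` is invented for this sketch.

```python
# Back-of-envelope recoil bookkeeping behind Doppler-compensated NRF.
AMU_MEV = 931.494   # atomic mass unit in MeV/c^2
C = 2.998e8         # speed of light, m/s

def recoil_ev(e_gamma_mev, mass_amu):
    """Free-nucleus recoil energy E_R = E^2 / (2 M c^2), in eV."""
    return e_gamma_mev**2 / (2.0 * mass_amu * AMU_MEV) * 1e6

e_gamma = 1.73                      # assumed representative 235U line, MeV
er = recoil_ev(e_gamma, 235)
# Emission and reabsorption each cost E_R, so the emitting nucleus must
# move toward the absorber fast enough for the Doppler shift to supply ~2 E_R.
v_over_c = 2.0 * er / (e_gamma * 1e6)
print(f"recoil per event: {er:.1f} eV")
print(f"required v/c: {v_over_c:.2e}  ->  v ~ {v_over_c * C:.0f} m/s")
```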
Radiation Detection and Imaging Laboratory (BEARING & DoNuTS main lab)
We have established a new Radiation Detection and Imaging Laboratory housing projects focused on the development of advanced concepts for the detection of gamma rays and neutrons. These developments include the registration and correlation of visual and nuclear emission information in order to improve on currently available capabilities in the detection, localization, and tracking of radioactive sources. This laboratory is part of our new BEARING (Berkeley Advanced Radiation Imaging for Neutrons and Gamma Rays) effort and is closely related to efforts within the DoNuTS project, such as the detector, electronics, and data analysis developments led by Prof. Siegrist of the Physics Department and the feature extraction and data mining led by Prof. Hochbaum of the Industrial Engineering and Operations Research Department.
Experimental facilities include:
• Gamma-ray imaging laboratory
– Electron-tracking-based Compton imaging instruments: high-resolution scientific CCD, temperature-variable cryostat, double-sided strip HPGe detector, and fully digital data acquisition system (including ten 8-channel, 16-bit, 100 MHz waveform digitizers).
– A Class-10,000 clean room for development, assembly, and characterization of semiconductor devices, including a probe station, a clean device storage area, and a Class-100 work bench.
– High-energy gamma-ray imaging instruments for radiography experiments, consisting of a custom-made, collimated 8x8 (5x5x50 mm3) BGO array and a data acquisition system.
• Machine-Vision Radiation Detection System Setup
– Large-Area Coded-Aperture Imager consisting of a 10x10 (10x10x10 cm3) NaI(Tl) array, a 2x2.5 m2 coded aperture, and a fully digital acquisition system (including fifteen 8-channel, 16-bit, 100 MHz waveform digitizers).
– Large, two-dimensional translational scanner to simulate two-dimensional movements in the FOV of the imaging instrument.
– Several additional NaI(Tl) detectors.
– Sets of video cameras to capture and track objects.
• Compact Compton Imager
– High-resolution Compton imaging instrument consisting of two large-volume HPGe and two large-volume Si(Li) detectors implemented in a double-sided strip configuration, a fully digital acquisition system (including twenty 8-channel, 16-bit, 100 MHz waveform digitizers), and a video camera and photo camera, all integrated on a movable cart (see the Compton-kinematics sketch after this list).
• Teaching Laboratory
– Several basic experimental stations including G-M and proportional counters; plastic, liquid, and NaI(Tl) scintillation detectors; and three HPGe detectors, as well as pulse-processing electronics and computers.
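As a hedged illustration of the event reconstruction the Compton imagers above rely on, the sketch below derives the cone of possible source directions from a two-interaction event. The geometry, energies, and the helper `compton_cone` are invented for illustration, not taken from these instruments' software.

```python
# Event-level Compton kinematics: two interaction sites and their energy
# deposits define a cone on which the source direction must lie.
import math
import numpy as np

ME_C2 = 511.0  # electron rest energy, keV

def compton_cone(e1_kev, e_total_kev, pos1, pos2):
    """Cone axis (unit vector pointing back toward the source side from
    the first interaction) and opening angle, for a photon of total
    energy e_total_kev depositing e1_kev at pos1, remainder at pos2."""
    cos_theta = 1.0 - ME_C2 * (1.0 / (e_total_kev - e1_kev) - 1.0 / e_total_kev)
    if abs(cos_theta) > 1.0:
        return None  # kinematically forbidden, reject the event
    axis = np.asarray(pos1, float) - np.asarray(pos2, float)
    axis /= np.linalg.norm(axis)
    return axis, math.acos(cos_theta)

# 662 keV (137Cs) photon depositing 200 keV at the first interaction
event = compton_cone(200.0, 662.0, pos1=(0.0, 0.0, 0.0), pos2=(1.0, 0.5, -2.0))
if event is not None:
    axis, theta = event
    print(f"cone axis {axis}, opening angle {math.degrees(theta):.1f} deg")
```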
Thermal Hydraulics Lab
Thermal hydraulics research in Nuclear Engineering utilizes space on two floors of Etcheverry Hall. These laboratories house several test loops for heat transfer experiments, including (1) an electrically heated DowTherm test loop, (2) a six-meter vertical loop for scale testing of a Generation IV BWR design (under construction), (3) an inertial fusion lithium waterfall simulator, and various two-phase flow experiments. Additional facilities have been set up to study the properties of molten salt coolant (FLiBe). Modern flow visualization equipment, including high-speed CMOS motion picture cameras with computer interfaces, has been installed in these devices, along with sophisticated multi-channel data acquisition equipment for multipoint temperature and pressure recording.
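One common rationale for an electrically heated simulant loop of this kind is matching the dimensionless groups of a high-temperature coolant at a much lower, easier-to-instrument temperature. The sketch below illustrates that bookkeeping; all property values are rough, order-of-magnitude assumptions for illustration, not measured data from these loops.

```python
# Similitude sketch: a simulant run is matched to the prototype coolant
# by preserving dimensionless groups such as Reynolds and Prandtl numbers.

def reynolds(rho, u, d, mu):
    return rho * u * d / mu          # inertial / viscous forces

def prandtl(mu, cp, k):
    return mu * cp / k               # momentum / thermal diffusivity

# Assumed rough properties: flibe near its operating temperature,
# DowTherm A near a modest loop temperature (illustrative only).
flibe    = dict(rho=1940.0, mu=6.0e-3, cp=2380.0, k=1.1)
dowtherm = dict(rho=1020.0, mu=6.5e-4, cp=1800.0, k=0.13)

for name, f in (("flibe", flibe), ("DowTherm A", dowtherm)):
    pr = prandtl(f["mu"], f["cp"], f["k"])
    re = reynolds(f["rho"], 1.0, 0.05, f["mu"])   # 1 m/s in a 5 cm channel
    print(f"{name:10s}  Pr = {pr:5.1f}   Re = {re:9.0f}")
```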
Computational Resources
Davis Etcheverry Computing Facility (DECF)
DECF is an interdepartmental computing facility that provides instructional and research services to the departments of Mechanical Engineering, Civil and Environmental Engineering, Industrial Engineering and Operations Research, Nuclear Engineering, Bioengineering, and Materials Science and Engineering. DECF maintains two computing laboratories (1111 and 1171 Etcheverry Hall) and three computing clusters.
The 1111 Linux cluster consists of 18 Dell Precision 370n workstations donated by Intel (Pentium 4, 3.08 GHz, 1 GB RAM, CD-RW, nVidia Quadro video card), 6 Dell Optiplex GX620 workstations (Windows XP), 2 black & white laser printers and 1 color laser printer, and 1 Canon CanoScan LiDE 50 color scanner (Windows only). The 1171 Linux cluster consists of 21 Precision T3400 workstations (Intel Core 2 Quad, 2 GB RAM), 3 Precision 370n workstations donated by Intel in 2004 (Intel Pentium 4, 1 GB RAM), and 2 black & white HP LaserJet printers.
Department Berkelium Cluster
The initial configuration of the Berkelium Cluster was developed by Professor Wirth's research group, with assistance from NE system administrators, using hardware donated by LLNL.
The cluster configuration was partially asymmetric in order to allow a variety of different simulations to run optimally, while still providing computing resources for simulations that can often only be run at larger facilities like NERSC. All of the machines had the same CPU, each with 8 cores, and a total of 4 CPUs per machine, for a packing density of 32 cores per computer. Redundant power supplies were used to prevent accidental damage to a computer in the case of a power supply failure. One 320 GB hard disk drive is more than sufficient to store the operating system, associated applications, MPI libraries, codes, and their associated data; a separate server is used to store data from the simulations.

Two of these computers have a total of 256 GB of memory installed in them, allowing large simulations and calculations that are often not possible on other machines due to memory requirements. Four have 128 GB of memory for intermediate-sized simulations. The other computers each have 64 GB of memory, which equates to 2 GB per core; these are provisioned for more traditional simulations that are often bound by CPU time rather than overall memory. Still, all of the machines are available on similar queues, and the higher-memory computers are used interchangeably for all simulations.

The real advantage this cluster offers over our previous clusters is the use of InfiniBand network interconnects. Quad Data Rate (QDR) adapters allow up to 40 Gb/s to be transferred between computers, so simulations that must exchange large amounts of information can run speedily. These interconnects also offer latencies on the order of 1/100th the latency of traditional Ethernet networking, and many modern codes are bound by MPI latencies, which scale with the interconnect latency. The Ethernet network serves as a management interface, keeping the InfiniBand network dedicated to simulations.

The cluster initially ran CentOS 5.5 for i386, a free rebuild of Red Hat Enterprise Linux 5.5 (RHEL 5.5). The PAE kernel was used, which allows a 32-bit kernel to address more than 4 GB of physical memory. The file server runs CentOS 5.4 with the x86_64 architecture; the 64-bit kernel natively supports large files, but this architecture is not available on the Livermore nodes.
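As a hedged illustration of how interconnect latency is typically quantified, here is a minimal MPI ping-pong microbenchmark sketch. It assumes mpi4py is available and is not a benchmark the department necessarily ran; small messages make the timing latency-dominated, which is where InfiniBand's advantage over Ethernet shows up.

```python
# Ping-pong latency sketch; run with: mpirun -np 2 python pingpong.py
from mpi4py import MPI
import time

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

N = 10000
buf = bytearray(8)  # tiny message, so timing is dominated by latency

comm.Barrier()
t0 = time.perf_counter()
if rank == 0:
    for _ in range(N):
        comm.Send(buf, dest=1)
        comm.Recv(buf, source=1)
elif rank == 1:
    for _ in range(N):
        comm.Recv(buf, source=0)
        comm.Send(buf, dest=0)
t1 = time.perf_counter()

if rank == 0:
    # each iteration is one round trip (two messages), hence the 2*N
    print(f"one-way latency ~ {(t1 - t0) / (2 * N) * 1e6:.2f} us")
```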
In addition to the initial hardware donation by LLNL, Professor Wirth secured a DOE NEUP Infrastructure Grant of $150,000. This funding was used to buy an upgrade for the existing PowerWulf compute engine. The upgrade includes 576 CPU cores, 1,504 GB of system RAM, and a QDR InfiniBand interconnect. Each of the 12 new computers has 48 cores, so communication between processors on a node should be extremely fast. Minimum RAM is 2 GiB per core, i.e., 96 GiB per node; two nodes have 192 GiB each and one has 256 GiB.

