This page will be deleted pending its creation on the CC wiki.
- 2 Version Selection
- 2.1 Graham Cluster
- 2.1.1 License
- 2.1.2 Module
- 2.2 Legacy Clusters
- 2.2.1 License
- 2.2.2 Module
- 3 Job Submission
- 3.1 Graham Cluster
- 3.2 Legacy Clusters
- 4 Example Job
- 4.1 Graham (default modules)
- 4.1.1 Threaded Job
- 4.1.2 Mpi Job
- 4.2 New Orca (legacy modules)
- 4.2.1 Threaded Job
- 4.2.2 MPI Job
- 4.3 Legacy Clusters
- 5 General Notes
- 5.1 Graham Cluster
- 5.2 Command Line Use
- 5.3 Memory Issues
- 5.4 Version and Revision
- 5.5 Check License Status
- 5.6 Legacy Instructions
- 6 References
| LSDYNA |
| --- |
| Description: Suite of programs for transient dynamic finite element analysis |
| SHARCNET Package information: see the LSDYNA software page in the web portal |
| Full list of SHARCNET supported software |
Before a research group can use LSDYNA on sharcnet, a license must be purchased directly from LSTC for the sharcnet license server. Alternatively, if a research group resides at an institution that has sharcnet computing hardware (mac, uwo, guelph, waterloo), it may be possible to use a pre-existing site license hosted on an accessible institutional license server. To access and use this software you must open a ticket and request to be added to the sharcnet lsdyna group.
Version Selection

Graham Cluster
License
Create and customize this file for your license server, where XXXXX should be replaced with your port number and Y with your server number:
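A minimal sketch of such a file, assuming it follows the compute canada convention (e.g. ~/.licenses/ls-dyna.lic):

```
#LICENSE_TYPE: network
#LICENSE_SERVER: XXXXX@licenseY.sharcnet.ca
```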
Module
The modules are loaded automatically in the testomp.sh and testmpi.sh scripts shown below in the example section.
For single node smp jobs the versions are provided by the ls-dyna module, while multi node mpp jobs use the ls-dyna-mpi module. The available versions (7.1.X, 8.X, 9.X and 10.X) can be listed and loaded as follows:
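```
# a sketch of the version selection commands; exact module and version
# names should be confirmed with module spider
module spider ls-dyna               # list single node smp versions
module load ls-dyna/10.1.0          # load an smp version (version is a placeholder)
module spider ls-dyna-mpi           # list multi node mpp versions
module load ls-dyna-mpi/10.1.0      # load an mpp version (version is a placeholder)
# older 7.1.X and 8.X/9.X releases may require a compatible compiler/mpi
# toolchain to be loaded first; module spider ls-dyna/<version> reports this
```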
Legacy Clusters
License
Once a license configuration is established, the research group will be given a 5 digit port number. The value should then be inserted into the appropriate departmental export statement before loading the module file as follows:
o UofT Mechanical Engineering Dept
o McGill Mechanical Engineering Department
o UW Mechanical and Mechatronics Engineering Dept
o Laurentian University Bharti School of Engineering
o Fictitious Example:
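```
# fictitious example only; the variable names follow the LSTC license
# client conventions, and the port/server values are placeholders
export LSTC_LICENSE=network
export LSTC_LICENSE_PORT=31111
export LSTC_LICENSE_SERVER=license.fictitious.ca
```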
Module
The next step is to load the desired sharcnet lsdyna module version. Check which modules are available by running the module avail command, then load one of the modules following the naming pattern shown in the example after the version list below:
r9.0.1 versions
o For serial or threaded jobs:
o For mpi jobs:
r8.0.0 versions
o For serial or threaded jobs:
o For mpi jobs:
r7.1.2 versions
o For serial or threaded jobs:
o For mpi jobs:
r7.1.1 versions
o For serial or threaded jobs:
o For mpi jobs:
r6.1.1 versions
o For serial or threaded jobs:
o For mpi jobs:
where r611.80542 provides lsdyna_s only.
ls980 versions
o For serial or threaded jobs:
o For mpi jobs:
Note 1) Restart capability for ls980smpB1 and ls980mppB1 is not supported. Note 2) The module using legacy openmpi/intel/1.4.5 will run extremely slowly.
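For example, the r9.0.1 modules might be loaded as shown below; the smp module name is an assumption patterned on the mpp names listed in the availability table:

```
module avail lsdyna                  # list all legacy lsdyna modules
module load lsdyna/smp/r901.109912   # serial or threaded (smp) jobs
module load lsdyna/mpp/r901.109912   # mpi (mpp) jobs
```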
Job Submission

Graham Cluster
The submission scripts myompjob.sh and mympijob.sh for the airbag problem are shown in the Example Job section below, for both graham and orca. Please note that you should specify your own username when submitting to a def account (not roberpj). Alternatively you can specify your resource allocation account:
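```
# placeholders; use your own group's account names
#SBATCH --account=def-yourusername   # default allocation
#SBATCH --account=rrg-yourpi         # resource allocation
```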
Sample threaded job submit script:
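```
sbatch myompjob.sh
```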
Sample mpi job submit script:
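```
sbatch mympijob.sh
```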
Legacy Clusters
When loading a lsdyna sharcnet legacy module on the new orca or a sharcnet legacy system, the single or double precision solvers are specified with lsdyna_s or lsdyna_d respectively, as shown in the following sqsub commands:
1cpu SERIAL Job
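```
# a sketch; runtime, memory and output file naming are example values
sqsub -r 1d -q serial --mpp=2G -o ofile.%J lsdyna_s i=airbag.deploy.k
```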
4cpu SMP Job
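```
# a sketch; 4 core threaded job, double precision solver
sqsub -r 1d -q threaded -n 4 --mpp=2G -o ofile.%J lsdyna_d ncpu=4 i=airbag.deploy.k
```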
If using an explicit solver, one can specify a conservative initial memory setting on the command line as follows:
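```
# a sketch; the memory value is an example conservative initial setting
sqsub -r 1d -q serial --mpp=2G -o ofile.%J lsdyna_d i=airbag.deploy.k memory=260M
```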
where memory is the minimum number of words shared by all processors: 8 byte words in double precision, or 4 byte words in single precision.
The initial value can be determined by starting a simulation interactively on the command line and finding output resembling: Memory required to begin solution : 754414. The number of words can be specified as memory=260M instead of memory=260000000; for further details see https://www.d3view.com/2006/10/a-few-words-on-memory-settings-in-ls-dyna/.
8cpu MPI Job
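```
# a sketch; 8 process mpi job with example runtime and memory values
sqsub -r 1d -q mpi -n 8 --mpp=2G -o ofile.%J lsdyna_s i=airbag.deploy.k
```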
The initial memory can also be specified for mpi jobs on the sqsub command line, with 'memory=' used by the first (master) processor to decompose the problem and 'memory2=' used on all processors, including the master, to solve the decomposed problem; the values are specified as 4 byte words in single precision or 8 byte words in double precision. The number of words can be given as memory=260M instead of memory=260000000 (or memory2=260M instead of memory2=260000000); for further details see https://www.d3view.com/2006/10/a-few-words-on-memory-settings-in-ls-dyna/. The initial values can be found by running the simulation interactively on an orca compute node and checking the output for a line such as: Memory required to begin solution (memory= 464898 memory2= 158794). These could then be implemented for a job run in the queue by doing:
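```
# memory values taken from the interactive run output above
sqsub -r 1d -q mpi -n 8 --mpp=2G -o ofile.%J lsdyna_s i=airbag.deploy.k memory=464898 memory2=158794
```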
The specification of memory2 on sharcnet or compute canada clusters is not beneficial, since the queue reserves the same memory per core on all nodes, so the master process performing the decomposition cannot be allocated more system memory than the rest. It is therefore sufficient to specify a single memory parameter by doing:
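```
# LSTC_MEMORY=auto lets explicit simulations grow their memory as needed
export LSTC_MEMORY=auto
sqsub -r 1d -q mpi -n 8 --mpp=2G -o ofile.%J lsdyna_s i=airbag.deploy.k memory=464898
```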
where the LSTC_MEMORY variable will only allow the memory to grow for explicit simulations. The following slurm examples demonstrate how to prescribe the memory parameters exactly.
Example Job

Copy the airbag example to your account:
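```
# the path shown is hypothetical; substitute the actual location of the example
cp /opt/sharcnet/lsdyna/examples/airbag.deploy.k .
```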
Graham (default modules)
Please note that graham does not have the sharcnet legacy modules installed on it.
Threaded Job
Sample submission script mysmpjob1.sh for 4 core single precision smp job using compute canada default cvmfs modules:
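```
#!/bin/bash
# mysmpjob1.sh - a minimal sketch; account name and module version are placeholders
#SBATCH --account=def-yourusername
#SBATCH --time=0-01:00               # d-hh:mm
#SBATCH --cpus-per-task=4
#SBATCH --mem=4000M
module load ls-dyna/10.1.0           # any available smp version (module spider ls-dyna)
ls-dyna_s ncpu=$SLURM_CPUS_PER_TASK i=airbag.deploy.k memory=1000M
```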
where memory = 4000M / 4 (bytes/word) = 1000M words
Mpi Job
Sample submission script mympijob2.sh for 4 core double precision mpp job using compute canada default cvmfs modules:
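```
#!/bin/bash
# mympijob2.sh - a minimal sketch; account, module version and mpp binary
# name are placeholders (check the loaded module for the exact binary name)
#SBATCH --account=def-yourusername
#SBATCH --time=0-01:00
#SBATCH --ntasks=4
#SBATCH --mem-per-cpu=4000M
module load ls-dyna-mpi/10.1.0       # any available mpp version (module spider ls-dyna-mpi)
srun ls-dyna_d i=airbag.deploy.k memory=500M
```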
where memory = 4000M / 8 (bytes/word) = 500M words
New Orca (legacy modules)
Please note that while new orca has most compute canada cvmfs modules available by default, it does not have the graham ls-dyna or ls-dyna-mpi modules installed. Therefore the only way to run lsdyna at present is with the legacy sharcnet lsdyna modules, as shown in the following smp and mpi examples.
Threaded Job
Submission script mysmpjob3.sh to run 16 core single precision single node smp job with sharcnet legacy modules:
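```
#!/bin/bash
# mysmpjob3.sh - a sketch; the legacy smp module name is an assumption
# patterned on the mpp module names, and the account is a placeholder
#SBATCH --account=def-yourusername
#SBATCH --time=0-01:00
#SBATCH --cpus-per-task=16
#SBATCH --mem=4000M
module load lsdyna/smp/r901.109912
lsdyna_s ncpu=$SLURM_CPUS_PER_TASK i=airbag.deploy.k memory=500M
```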
where memory = 4000M / 8 (bytes/word) = 500M words
MPI Job
To run the sharcnet legacy module lsdyna/mpp/r712.95028 efficiently over several compute nodes with many cores requires using orca's xeon nodes, as shown in mympijob4.sh below. To run lsdyna on orca's opteron nodes with module lsdyna/mpp/r712.95028 requires using only one opteron node, as shown in mympijob5.sh below. The other legacy sharcnet lsdyna mpp modules shown in the Availability Table (lsdyna/mpp/ls971.r85718, lsdyna/mpp/ls980B1.011113, lsdyna/mpp/r611.79036, lsdyna/mpp/r611.80542, lsdyna/mpp/r711.88920, lsdyna/mpp/r800.95359 and lsdyna/mpp/r901.109912) have yet to be tested; if you want to use one but cannot get it working, please open a ticket. Note that lines containing a space between '#' and 'SBATCH' are comments; to activate such lines, remove the space, i.e. '#SBATCH'. It is recommended to use the ls-dyna-mpi module on graham instead, since that machine is far newer, faster and more reliable.
Multi Node Slurm Script
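```
#!/bin/bash
# mympijob4.sh - multi node sketch for orca xeon nodes; the account, node and
# core counts, constraint name and memory values are placeholders
#SBATCH --account=def-yourusername
#SBATCH --time=0-01:00
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=16
#SBATCH --mem-per-cpu=1000M
#SBATCH --constraint=xeon
module load lsdyna/mpp/r712.95028
srun lsdyna_s i=airbag.deploy.k memory=250M
```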
Single Node Slurm Script
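```
#!/bin/bash
# mympijob5.sh - single node sketch for an orca opteron node; the account and
# core count are placeholders, and the memory values follow the note below
#SBATCH --account=def-yourusername
#SBATCH --time=0-01:00
#SBATCH --nodes=1
#SBATCH --ntasks=16
#SBATCH --mem-per-cpu=1000M
module load lsdyna/mpp/r712.95028
srun lsdyna_s i=airbag.deploy.k memory=250M
```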
where memory = 1000M / 4 (bytes/word) = 250M words
Legacy Clusters
STEP1) The following shows sqsub submission of the airbag example to the mpi queue. It is recommended to first edit airbag.deploy.k and change endtim to 3.000E-00 so the job runs long enough to perform the restart in steps 2 and 3 below:
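```
# a sketch of the initial submission; runtime and sizes are example values
sqsub -r 1d -q mpi -n 4 --mpp=2G -o ofile.%J lsdyna_s i=airbag.deploy.k
```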
STEP2) With the job still running, use the echo command as follows to create a file called 'D3KIL' that will trigger generation of restart files, at which point the D3KIL file itself will be erased. Do this once a day if data loss is critical to you, or once or twice just before the sqsub -r time limit is reached. Further information can be found at http://www.dynasupport.com/tutorial/ls-dyna-users-guide/sense-switch-control:
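```
# sense switch sw3 requests a restart dump; the exact switch syntax is assumed,
# see the sense switch control link above
echo sw3 > D3KIL
```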
STEP3) Before the job can be restarted, the following two lines must be added to the airbag.deploy.restart.k file:
STEP4) Now resubmit the job as follows using 'r=' to specify the restart file:
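```
# d3dump01 is shown as an example restart file name
sqsub -r 1d -q mpi -n 4 --mpp=2G -o ofile.%J lsdyna_s i=airbag.deploy.restart.k r=d3dump01
```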
General Notes

Graham Cluster
There are no special notes at present regarding the use of lsdyna on graham.
Command Line Use
Following the upgrade of orca to the compute canada module stack, the orca development nodes were discontinued. This section is therefore no longer relevant and remains for historical purposes only.
Parallel lsdyna jobs can be run interactively on the command line (outside the queue) for short run testing purposes as follows:
o Orca Development Node MPP (mpi) Example
o Orca Development Node SMP (threaded) Example
Memory Issues
A minimum of mpp=2G is recommended, even though the memory requirement of a job may suggest much less is needed. For instance, setting mpp=1G for the airbag test job above will result in the following error when running a job in the queue:
Another error message, which can occur after a job has run for a while if mpp is chosen too small, is:
To get an idea of the amount of memory a job used, run grep on the output file:
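```
# ofile.123456 is an example output file name
grep -i "memory required" ofile.123456
```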
Version and Revision
Again run grep on the output file to extract the major and minor revision:
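```
# ofile.123456 is an example output file name
grep -i -e version -e revision ofile.123456
```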
Check License Status
Running and Queued Programs
To get a summary of running and/or queued jobs, use the lstc_qrun command as follows. Here 'queued' means the job has started according to the sqsub command but is actually waiting for license resources to become available; once these are acquired, the job will start running on the cluster and appear as a Running Program in the lstc_qrun output:
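```
# list running and queued lsdyna programs
lstc_qrun
```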
To show license file expiration details, append '-R' to the command:
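```
lstc_qrun -R
```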
Killing A Program
The queue normally kills lsdyna jobs cleanly. However, it is possible that licenses for a job (which is no longer running in the queue and therefore no longer has any processes on the cluster) will continue to be tied up according to the lstc_qrun command. To kill such a program, determine the pid@hostname from the lstc_qrun output, then run the following kill command. The following example demonstrates the procedure for username roberpj:
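```
# the command name and syntax are assumed from the LSTC license tools, and
# the pid@hostname value shown is an example taken from lstc_qrun output
lstc_qkill 12345@orc-login1
```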
Legacy Instructions
The following binaries remain available on saw and orca for backward compatibility testing:
There are currently no sharcnet modules for these versions, hence jobs should be submitted as follows:
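```
# a sketch; the install path is hypothetical, adjust to the actual binary location
sqsub -r 1d -q serial --mpp=2G -o ofile.%J /opt/sharcnet/lsdyna/ls971/bin/ls971_s_R3_1 i=airbag.deploy.k
```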
Note! The ls971_s_R3_1 and ls971_d_R3_1 binaries do not work; a fix is being investigated.
References

o LSTC LS-DYNA Homepage
http://www.lstc.com/products/ls-dyna
o LSTC LS-DYNA Support (Tutorials, HowTos, Faq, Manuals, Release Notes, News, Links)
http://www.dynasupport.com/
o LS-DYNA Release Notes
http://www.dynasupport.com/release-notes
o LS-PrePost Online Documentation FAQ
http://www.lstc.com/lspp/content/faq.shtml
o LSTC Download/Install Overview Page
Provides links to binaries for LS-DYNA SMP/MPP, LS-OPT, LS-PREPOST, LS-TASC.
http://www.lstc.com/download
Memory Links
o Implicit: Memory notes
http://www.dynasupport.com/howtos/implicit/implicit-memory-notes
o LS-DYNA Support Environment variables
http://www.dynasupport.com/howtos/general/environment-variables
o LS-DYNA and d3VIEW Blog (a few words on memory settings)
http://blog2.d3view.com/a-few-words-on-memory-settings-in-ls-dyna/
o Convert Words to GB ie) memory=500MW (3.73GB)
http://deviceanalytics.com/memcalc.php
I carried out my second Master's internship under the guidance of Dr. Marc Baaden, Dr. Serge Perez, and Dr. Anne Imberty. This internship project aimed to develop a new kind of standardized visualization for complex carbohydrate molecules.
Background
In Nature, carbohydrates form an important family of biomolecules. Carbohydrates, in the form of polysaccharides, glycopeptides, glycolipids, glycosaminoglycans, proteoglycans, or other glycoconjugates, have long been recognized to participate in many biological processes. They can be present in very diverse and complex forms. In a similar fashion to proteins with amino acids, we can build complex carbohydrates from individual units: monosaccharides. But whereas the protein alphabet is made of “only” 20 letters, around 120 monosaccharides are known, and the fact that they can be linked through different positions adds even more complexity to the whole construction process.
Methods
Using the molecular visualization software UnityMol[1], it is easy to represent proteins, RNA/DNA, and biological networks. UnityMol is based on a new molecular representation called HyperBalls[2] and is built on the Unity3D video game engine, using graphical and shader properties to draw very large complexes of several thousand atoms without screen lag.
This software is developed within Marc Baaden's team and is currently being updated by Marc's PhD student, Sebastien Doutreline. Many great features and performance improvements are to come!
The software is available in the SourceForge repo: http://sourceforge.net/projects/unitymol/. UnityMol is also multi-platform (Linux, Mac & Windows).
Feel free to download the tutorial too if you feel lost (it’s still a development interface).
Results
Fig 1 – Pentasaccharide LSTc (Glc, Gal, GlcNAc, Gal, NeuAc). Ligand of the Polyomavirus capsid protein (3NXD).
The first objective was to integrate a color code for each monosaccharide. Serge Perez developed a colored pictogram code for every monosaccharide (http://glycopedia.eu/120-Monosaccharides), and we wanted to integrate this color code into UnityMol. In Fig 1 you can see an example of this color code with the pentasaccharide LSTc. You can find the correspondence of every color in the Tutorial for SweetUnityMol.
Fig 2 – Pentasaccharide LSTc in Licorice representation with RingBlending mode enabled.
The second representation, RingBlending, aims to fill the rings of monosaccharides. This feature is inspired by the "PaperChain" feature in the software VMD [3] (cf. Fig 2). The monosaccharide color code is also implemented.
A third representation, called SugarRibbons, aims to reproduce the secondary structure of a carbohydrate molecule. Like the "cartoon" representation for proteins, SugarRibbons follows the backbone of the polysaccharide and becomes thicker around the sugar rings (cf. Fig 3). The monosaccharide color code is also implemented, and the position of each ring oxygen can be represented by a colored sphere (to give a better view of the monosaccharide conformation). This representation fits the atom positions, so despite the approximation inherent in visualizing secondary structure, SugarRibbons stays very close to the polysaccharide structure and atomic fluctuations between two structures remain visible.
Fig 3 – Pentasaccharide LSTc in SugarRibbons mode, with oxygen atoms represented by red spheres.
More images and examples can be seen on my Gallery page.
Resources
- You can find more information on the Glycopedia website, maintained by Serge Perez: http://glycopedia.eu/Presentation-200.html.
- Or in our article: doi:10.1093/glycob/cwu133
References
[1] Chavent, M., Vanel, A., Tek, A., Levy, B., Robert, S., Raffin, B., & Baaden, M. (2011). GPU-accelerated atom and dynamic bond visualization using hyperballs: a unified algorithm for balls, sticks, and hyperboloids. Journal of Computational Chemistry, 32(13), 2924–35. doi:10.1002/jcc.21861
[2] Lv, Z., Tek, A., Da Silva, F., Empereur-mot, C., Chavent, M., & Baaden, M. (2013). Game on, science – how video game technology may help biologists tackle visualization challenges. PloS One, 8(3), e57990. doi:10.1371/journal.pone.0057990
[3] Cross, S., Kuttel, M. M., Stone, J. E., & Gain, J. E. (2009). Visualisation of cyclic and multi-branched molecules with VMD. Journal of Molecular Graphics & Modelling, 28(2), 131–9. doi:10.1016/j.jmgm.2009.04.010
Contributors: Marc Baaden, Matthieu Chavent, Alex Tek, Zhihan Lv, Franck Da Silva, Charly Empereur-mot, Sebastien Doutreline, Mikael Trellet, Caroline Roudier.