Geometry.Net - the online learning center
Page 5: entries 81-100 of 182

Parallel Computing: more books (100)
  1. Spatially Structured Evolutionary Algorithms: Artificial Evolution in Space and Time (Natural Computing Series) by Marco Tomassini, 2010-11-30
  2. Massively Parallel, Optical, and Neural Computing in Japan, (German National Research Center for Computer Scie)
  3. Applied Parallel Computing. Large Scale Scientific and Industrial Problems: 4th International Workshop, PARA'98, Umea, Sweden, June 14-17, 1998, Proceedings ... Notes in Computer Science) (v. 1541)
  4. Massively Parallel, Optical, and Neural Computing in the United States, by Robert Moxley, Gilbert Kalb, 1992-01-01
  5. Concurrent and Parallel Computing: Theory, Implementation and Applications
  6. Handbook of Parallel Computing: Models, Algorithms and Applications (Chapman & Hall/CRC Computer & Information Science Series)
  7. Tools and Environments for Parallel and Distributed Computing (Wiley Series on Parallel and Distributed Computing)
  8. Handbook of Sensor Networks: Algorithms and Architectures (Wiley Series on Parallel and Distributed Computing)
  9. Parallel Computing: Principles and Practice by T. J. Fountain, 2006-11-23
  10. Introduction to Parallel Computing by Ted G. Lewis, Hesham El-Rewini, 1992-01
  11. Parallel Algorithms and Cluster Computing: Implementations, Algorithms and Applications (Lecture Notes in Computational Science and Engineering)
  12. Grid Computing: The New Frontier of High Performance Computing, Volume 14 (Advances in Parallel Computing)
  13. Parallel I/O for High Performance Computing by John M. May, 2000-10-23
  14. Parallel, Distributed and Grid Computing for Engineering (Computational Science, Engineering & Tec)

81. Sivasubramaniam, Anand
Pennsylvania State University. Computer architecture, operating systems, parallel computing, and simulation and evaluation of computer systems.
http://www.cse.psu.edu/~anand/

82. PHAML
Fortran 90 code using adaptive refinement, multigrid and parallel computing to solve 2D linear elliptic PDEs. Successor to MGGHAT.
http://math.nist.gov/phaml/
PHAML
The Parallel Hierarchical Adaptive MultiLevel Project
Software
PHAML is now available! PHAML version 0.9.21 can be downloaded as the file phaml-0.9.21.tar.gz (570K) for Unix systems. When unpacked, it places everything in a directory named phaml-0.9.21; see the LICENSE file for the terms of use.
Goals
The primary goal of the PHAML project is to produce a parallel version of MGGHAT. The target architecture is distributed-memory multiprocessors, such as networked workstations, the IBM SP2, or Beowulf-type PC clusters like JazzNet. MGGHAT is a sequential program for the solution of 2D elliptic partial differential equations using low- or high-order finite elements, adaptive mesh refinement based on newest-node bisection of triangles, and multigrid. All aspects of the method are based on hierarchical basis functions.

Adaptive refinement, multigrid and parallel computing have each been shown to be effective means of vastly reducing the time required to solve differential equations. However, effectively combining all three techniques is not easy and is the subject of current research. PHAML is an attempt to solve this problem. There are several subgoals that must be addressed to achieve the primary goal (a sketch of the basic cycle follows the list):
partitioning adaptive grids
parallel adaptive refinement
parallel multigrid
distributed data structures
efficient portable parallel code
simple yet powerful user interface

[Figure: PHAML solution on four processors of an equation with a singular boundary condition. The colors or shades of grey indicate the region assigned to each processor.]
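To make the combination concrete, here is a minimal, hypothetical sketch in C of the solve-estimate-refine cycle that adaptive multilevel solvers of this kind are organized around. The 1D "mesh", the model solution u(x) = sqrt(x), and the error indicator are toy stand-ins, not PHAML's actual data structures or API.

```c
/* A toy solve-estimate-refine loop: repeatedly bisect the element with
 * the largest error indicator. Not PHAML code. */
#include <stdio.h>
#include <math.h>

#define MAX_ELEMS 4096

int main(void) {
    double x[MAX_ELEMS + 1];   /* element endpoints on [0,1] */
    int n = 4;                 /* start with 4 uniform elements */
    for (int i = 0; i <= n; i++) x[i] = (double)i / n;

    double tol = 1e-2;
    for (int pass = 0; pass < 200 && n < MAX_ELEMS; pass++) {
        /* "solve" + "estimate": use the jump of u(x) = sqrt(x) across
         * each element as a stand-in error indicator */
        int worst = 0; double worst_err = 0.0;
        for (int e = 0; e < n; e++) {
            double err = fabs(sqrt(x[e + 1]) - sqrt(x[e]));
            if (err > worst_err) { worst_err = err; worst = e; }
        }
        if (worst_err < tol) break;

        /* "refine": bisect the worst element (a 1D analogue of
         * newest-node bisection of triangles) */
        for (int i = n; i > worst; i--) x[i + 1] = x[i];
        x[worst + 1] = 0.5 * (x[worst] + x[worst + 2]);
        n++;
    }
    printf("final mesh: %d elements, concentrated near the x = 0 singularity\n", n);
    return 0;
}
```

As in the figure above, refinement clusters where the solution is least smooth; a parallel version must then repartition the unevenly refined grid across processors, which is exactly the first subgoal in the list.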

83. Parallel Computing Tutorial
Tutorial providing training in the concepts and practice of parallel computing. Taught at corporations, government agencies, conferences, etc.
http://www.eecs.umich.edu/~qstout/tut/
Parallel Computing 101
Quentin F. Stout
Christiane Jablonowski

Parallel computers are easy to build; it's the software that takes work. See us at Supercomputing SC'05 in Seattle, Sunday, November 13, 2005.
Abstract
This tutorial provides a comprehensive overview of parallel computing, emphasizing those aspects most relevant to the user. It is suitable for new or prospective users, managers, students, and anyone seeking a general overview of parallel computing. It discusses software and hardware, with an emphasis on standards, portability, and systems that are commercially or freely available. Systems examined include clusters, the Grid, and tightly integrated supercomputers. The tutorial provides training in parallel computing concepts and terminology, and uses examples selected from large-scale engineering, scientific, and data-intensive applications. These real-world examples are targeted at distributed-memory systems using MPI, Grid systems using Globus, shared-memory systems using OpenMP, and hybrid systems that combine the MPI and OpenMP programming paradigms. The tutorial shows basic parallelization approaches and discusses some of the software engineering aspects of the parallelization process, including the use of state-of-the-art tools. The tools introduced range from parallel debugging tools to performance analysis and tuning packages. We use large-scale projects as examples to help the attendees understand the issues involved in developing efficient programs. Examples include: crash simulation (a complex distributed-memory application parallelized for Ford Motor); climate modeling (an application highlighting distributed, shared, and vector ideas with examples from NCAR, NASA and ECMWF); data mining (an I/O- and data-intensive application); space weather prediction (an adaptive mesh code scaling to well over 1000 processors, funded by NASA/DoD/NSF); and design of ethical clinical trials (a memory-intensive application funded by NSF). Attendees find the lessons convincing, and occasionally humorous, because we discuss mistakes as well as successes.
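As a flavor of the hybrid MPI + OpenMP style the tutorial covers, here is a minimal hello-world sketch in C (not taken from the tutorial materials); each MPI rank spawns an OpenMP thread team and reports its identity.

```c
/* Minimal hybrid MPI+OpenMP hello. Compile e.g. with: mpicc -fopenmp hello.c
 * (details vary by MPI installation; this is illustrative only). */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int provided, rank, nprocs;
    /* request an MPI thread level compatible with OpenMP regions */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    #pragma omp parallel
    {
        printf("MPI rank %d of %d, OpenMP thread %d of %d\n",
               rank, nprocs, omp_get_thread_num(), omp_get_num_threads());
    }

    MPI_Finalize();
    return 0;
}
```

In the hybrid style, MPI handles communication between distributed-memory nodes while OpenMP threads share memory within a node.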

84. The Paradigm Group
Research in parallel computing with applications to geographic/spatial information systems. Projects and bibliography of papers.
http://www.scs.carleton.ca/~gis

  • HPCVL receives $5 million from the Canada Foundation for Innovation (CFI)
  • Ontario announces $25.4M investment in HPCVL (High Performance Computing Virtual Laboratory)
  • Sun Microsystems donates 3D computer hardware to Carleton University
  • $1 million grant from IBM recognizes Canadian research

85. Parallel Computing Resources
Resources in parallel computing, maintained by Quentin Stout.
http://www.eecs.umich.edu/~qstout/parlinks.html
Resources for Parallel Computing
This list is maintained at www.eecs.umich.edu/~qstout/parlinks.html, where the entries are linked to the resources. Rather than creating a comprehensive, overwhelming list of resources, I have tried to be selective, pointing to the best ones that I am aware of in each category. You can send me an email to suggest modifications - I'm at qstout umich edu
  • A slightly whimsical explanation of parallel computing.
  • Tutorials
    • From Edinburgh Parallel Computing Center (EPCC): On-line courses include MPI, HPF, Mesh Generation, Introduction to Computational Science, HPC in Business.
    • From NCSA: On-line courses include MPI, OpenMP, and multilevel parallel programming (MPI + OpenMP).
    • Parallel Computing 101, a tutorial for beginning and intermediate users, managers, people contemplating purchasing or building a parallel computer, etc.
  • Distributed Systems Online, a thorough and up-to-date listing of parallel and supercomputing sites, books, and events, maintained by the IEEE Computer Society.
  • Newsgroups: comp.parallel, comp.sys.super (there are many others on more specialized topics)

86. Parallel Computing
Parallel Computing. ISSN 0167-8191. Publisher: Elsevier Science.
http://elib.cs.sfu.ca/Collections/CMPT/cs-journals/P-Elsevier/J-Elsevier-PC.html
Parallel Computing
The Internet Electronic Library Project at SFU / Prof. Rob Cameron / cameron@cs.sfu.ca

87. Bernd Stramm Home Page
Computer architecture, parallel computing, embedded systems, high-performance computing, and heterogeneous parallel systems.
http://www.bernd-stramm.com
Bernd Stramm's Home Page
Some stuff about me: I am a computer scientist and engineer, with interests and experience in computer architecture (hardware and software), parallel computing, embedded systems, high-performance computing, heterogeneous parallel systems, models of computation, and software engineering. Useful in the pursuit of these interests has been my experience with object-oriented design, performance evaluation, simulation, modeling, and partitioning and mapping of parallel programs. On my general topics page, I explain a little more about what these interests are. I have some education, and a few publications.

88. The 17th IASTED International Conference on Parallel and Distributed Computing and Systems
Sponsored by the IASTED Technical Committee on Parallel and Distributed Computing and Systems. Topics include parallel computing, cluster computing, and heterogeneous computing.
http://www.iasted.org/conferences/2005/phoenix/c466.htm
PRELIMINARY CALL FOR PAPERS
The 17th IASTED International Conference on
PARALLEL AND DISTRIBUTED COMPUTING AND SYSTEMS
~PDCS 2005~
November 14-16, 2005
Phoenix, AZ, USA
***Extended Deadlines***
SPONSORS
The International Association of Science and Technology for Development (IASTED)
Technical Committee on Parallel and Distributed Computing and Systems
KEYNOTE SPEAKER: Prof. Arun Somani, Iowa State University, USA
SPECIAL SESSIONS/WORKSHOPS: For more information on the special sessions or workshops, please see http://cactus.eas.asu.edu/WorkshopPDCS
WORKSHOPS AT PDCS 2005: First International Workshop on Distributed Algorithms and Applications for Wireless and Mobile Systems ~DAAWMS 2005~: http://www.utdallas.edu/~nxm020100/daawms/

89. Prof. Frank Dehne
School of Computer Science, Carleton University. Parallel computing, coarse-grained parallel algorithms, parallel data mining and OLAP, computational geometry, image processing.
http://www.scs.carleton.ca/~dehne/

90. CS 838 - Topics In Parallel Computing - Spring 1999
There are a couple of books on parallel algorithms and parallel computing you might find useful, e.g., Introduction to Parallel Computing (Benjamin/Cummings).
http://www.cs.wisc.edu/~tvrdik/cs838.html
CS 838: Topics in Parallel Computing
Spring 1999
UNIVERSITY OF WISCONSIN-MADISON
Computer Sciences Department
Administrative details
Instructor: Pavel Tvrdik
Email: tvrdik@cs.wisc.edu
Office: CS 6376
Office hours: Tuesday/Thursday 9:30-11:00 a.m. or by appointment
Lecture times: Tuesday/Thursday 8:00-9:15 a.m.
Classroom: 1221 Computer Sciences
The contents
  • Syllabus
  • Schedule and materials
  • Optional books
  • Course requirements ...
  • Grading policy
    The syllabus of the lectures
    The aim of the course is to introduce you to the art of designing and analyzing efficient parallel algorithms for both shared-memory and distributed-memory machines. It is structured into four major parts.
    The first part of the course is a theoretical introduction to the field of design and analysis of parallel algorithms. We will explain the metrics for measuring the quality and performance of parallel algorithms, with the emphasis on scalability and isoefficiency. To prepare the framework for parallel complexity theory, we will introduce a fundamental model, the PRAM model. Then we will introduce the basics of parallel complexity theory to provide a formal framework for explaining why some problems are easier to parallelize than others. More specifically, we will study NC-algorithms and P-completeness.
    The second part of the course will deal with communication issues of distributed memory machines. Processors in a distributed memory machine need to communicate to overcome the fact that there is no global common shared storage and that all the information is scattered among processors' local memories. First we survey interconnection topologies and communication technologies, their structural and computational properties, embeddings and simulations among them. All this will form a framework for studying interprocessor communication algorithms, both point-to-point and collective communication operations. We will concentrate mainly on orthogonal topologies, such as hypercubes, meshes, and tori, and will study basic routing algorithms, permutation routing, and one-to-all as well as all-to-all communication operation algorithms. We conclude with some more realistic abstract models for distributed memory parallel computations.
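As one concrete instance of the one-to-all operations mentioned above, here is a sketch in C of recursive-doubling broadcast, where the set of processes holding the value doubles along one hypercube dimension per step. In practice one would simply call MPI_Bcast; the variable names here are illustrative only.

```c
/* One-to-all broadcast from rank 0 by recursive doubling: after log2(p)
 * steps every rank holds the value. Each exchange is along one hypercube
 * dimension (partner = rank ^ mask). Works for any p, not just powers of 2. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, p, value = 0;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &p);

    if (rank == 0) value = 42;            /* the data to broadcast */

    for (int mask = 1; mask < p; mask <<= 1) {
        if (rank < mask) {                /* already holds the value: send */
            int partner = rank + mask;
            if (partner < p)
                MPI_Send(&value, 1, MPI_INT, partner, 0, MPI_COMM_WORLD);
        } else if (rank < 2 * mask) {     /* receives in this step */
            MPI_Recv(&value, 1, MPI_INT, rank - mask, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        }
    }
    printf("rank %d has value %d\n", rank, value);
    MPI_Finalize();
    return 0;
}
```
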
91. Parallel Computing Using Optical Interconnections
An extremely important aspect of parallel computing is data communication among processors; includes a section (2.3) on parallel computing with intelligent optical networks.
http://www.mcs.newpaltz.edu/~li/pcuoi.html
Parallel Computing Using Optical Interconnections
Edited by
Keqin Li, State University of New York at New Paltz
Yi Pan, University of Dayton
Si Qing Zheng, University of Texas at Dallas
Kluwer Academic Publishers
Published on September 25, 1998
ISBN 0-7923-8296-X
Preface
Motivation
The circuit density offered by VLSI technology provides the means for implementing systems with a large number of processors. By today's standards, a massively parallel processing (MPP) system consists of hundreds or thousands of processors, and to achieve teraflops performance it is expected that more and more processors will be incorporated into a single system. An extremely important aspect of parallel computing is data communication among processors in parallel computing systems. Due to two-dimensionality, I/O constraints, and electrical properties such as resistance, capacitance, and inductance of VLSI circuits, VLSI technology is not well suited to interconnecting communication-intensive systems. Advances in optical technologies have made it possible to implement optical interconnections in future MPP systems. Photons are non-charged particles and do not naturally interact. Consequently, optical interconnects have many desirable characteristics: high speed (the speed of light), increased fanout, high bandwidth, high reliability, support for longer interconnection lengths, low power requirements, and immunity to EMI with reduced crosstalk. Optics can utilize free-space interconnects as well as guided-wave technology, neither of which has the problems mentioned for VLSI technology. Optical interconnections can be built at various levels, providing chip-to-chip, module-to-module, board-to-board, and node-to-node communications.

92. Home
The Edinburgh Parallel Computing Centre provides computing resources to Edinburgh University and industry. Projects, technical support, and publications.
http://www.epcc.ed.ac.uk/
EPCC is a bridge to the world of advanced computing for industry, commerce and research. With consultancy, training and collaborative contracts, we can help you make the best of twenty-first century computing technology. Founded at the University of Edinburgh in 1990, EPCC is a leading European centre of expertise in advanced research, technology transfer and the provision of supercomputer services to universities.

93. 6.338J/18.337J Applied Parallel Computing

http://beowulf.lcs.mit.edu/18.337/

94. PCS'99 - 2nd International Conference On Parallel Computing Systems
International Conference on Parallel Computing Systems. Ensenada, Mexico.
http://www.cicese.mx/~pcs99/

95. London - University Of Westminster - Cavendish School Of Computer Science
Cavendish School of Computer Science. Research Centre for Parallel Computing; Industrial Control Centre; Centre for Microelectronic Systems Applications.
http://www.wmin.ac.uk/cscs/
CLEARING - 2005
Do you want to study for a degree in Computing, Internet Computing or Software Engineering and have missed your grades? We may be able to help here at The Cavendish School of Computer Science. Contact us immediately on 020 7911 5000 and select option 1.
New MSc in Mobile Computing
Microsoft Day at Westminster - Report
University of Westminster, Headquarters, 309 Regent Street, London W1B 2UW, +44 (0)20 7911 5000

96. Introduction To Parallel Computing
Motivating parallelism; scope of parallel computing; organization and contents of the text; issues in sorting on parallel computers; sorting networks.
http://www-users.cs.umn.edu/~karypis/parbook/
Introduction to Parallel Computing
Ananth Grama, Purdue University, W. Lafayette, IN 47906 (ayg@cs.purdue.edu); Anshul Gupta, IBM T.J. Watson Research Center, Yorktown Heights, NY 10598 (anshul@watson.ibm.com); George Karypis, University of Minnesota, Minneapolis, MN 55455 (karypis@cs.umn.edu); Vipin Kumar, University of Minnesota, Minneapolis, MN 55455 (kumar@cs.umn.edu). Follow this link for a recent review of the book published at IEEE Distributed Systems Online.
Solutions to Selected Problems
The solutions are password protected and are only available to lecturers at academic institutions. Click here to apply for a password. Click here to download the solutions (PDF file).
Table of Contents (PDF file)
PART I: BASIC CONCEPTS
1. Introduction (figures: [PDF] [PS])
  • Motivating Parallelism
  • Scope of Parallel Computing
  • Organization and Contents of the Text
2. Parallel Programming Platforms (figures: [PPT] [PDF] [PS]) (GK lecture slides [PDF]) (AG lecture slides [PPT] [PDF] [PS])
  • Implicit Parallelism: Trends in Microprocessor Architectures
  • Limitations of Memory System Performance
  • Dichotomy of Parallel Computing Platforms
  • Physical Organization of Parallel Platforms
  • Communication Costs in Parallel Machines
  • Routing Mechanisms for Interconnection Networks
  • Impact of Process-Processor Mapping and Mapping Techniques
  • Bibliographic Remarks
3. Principles of Parallel Algorithm Design (figures: [PDF] [PS])
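To give a feel for the "Communication Costs in Parallel Machines" chapter listed above, here is a small C illustration of the classic startup-plus-per-word cost model t(m) = ts + m*tw; the latency and bandwidth numbers below are invented for illustration and are not taken from the book.

```c
/* Hypothetical numbers for the startup/per-word message cost model
 * t(m) = ts + m*tw; ts, tw, and the message sizes are made up. */
#include <stdio.h>

int main(void) {
    double ts = 50e-6;   /* assumed startup latency: 50 microseconds */
    double tw = 2e-9;    /* assumed per-byte transfer time (~500 MB/s) */

    int sizes[] = { 8, 1024, 1048576 };
    for (int i = 0; i < 3; i++) {
        double t = ts + sizes[i] * tw;
        printf("%8d bytes: %8.1f us (%3.0f%% of it startup)\n",
               sizes[i], t * 1e6, 100.0 * ts / t);
    }
    return 0;
}
```

With these invented numbers, startup cost dominates small messages entirely, which is why parallel programs try to aggregate communication into fewer, larger messages.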

97. Introduction To Parallel Computing And Cluster Computers
This is a very basic introduction to the world of parallel computing. I've tried to provide all the information needed to get started.
http://www.scl.ameslab.gov/Projects/parallel_computing/
Introduction to Parallel Computing and Cluster Computers
Dave Turner - Ames Laboratory
turner@ameslab.gov

This is a very basic introduction to the world of parallel computing. I've tried to provide all the information needed to get started. There are also links to additional information on more advanced topics. The last part is a basic introduction to designing and building cluster computers.

98. SAL Parallel Computing
Parallel computing extends to systems with more processors to obtain speedup in code execution. The efficiency and effectiveness of the parallelism are ... (a sketch of the basic speedup arithmetic follows below).
http://sal.jyu.fi/C/
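Since the excerpt mentions speedup and efficiency, here is a small self-contained C example of the standard definitions (speedup S = T1/Tp, efficiency E = S/p) using Amdahl's law with a made-up 5% serial fraction; it illustrates the general concept and is not code from the SAL site.

```c
/* Speedup and efficiency via Amdahl's law, Tp = T1*(f + (1-f)/p) for
 * serial fraction f. The fraction and processor counts are illustrative. */
#include <stdio.h>

int main(void) {
    double f = 0.05;                      /* assumed serial fraction: 5% */
    int procs[] = { 1, 4, 16, 64, 256 };

    for (int i = 0; i < 5; i++) {
        int p = procs[i];
        double speedup = 1.0 / (f + (1.0 - f) / p);
        double efficiency = speedup / p;
        printf("p = %3d: speedup %6.2f, efficiency %5.1f%%\n",
               p, speedup, 100.0 * efficiency);
    }
    return 0;
}
```

Even with only 5% serial work, efficiency collapses at large p, which is why the scalability and isoefficiency metrics mentioned in entry 90 matter.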

99. OUP: Parallel Scientific Computation: Bisseling
The first text to explain how to use BSP in parallel computing; clear exposition of distributed-memory parallel computing with applications to core topics.
http://www.oup.co.uk/isbn/0-19-852939-2
Parallel Scientific Computation
A Structured Approach using BSP and MPI
Rob H. Bisseling
Publication date: 4 March 2004
324 pages, frontispiece, 4pp colour plates, numerous line figures, 234mm x 156mm
Ordering: individual customers, and teachers in UK and European schools (and FE colleges in the UK), order by phone, post, or fax.

100. GridServer - Grid Computing For The Virtual Enterprise
Commercial enterprise that purchases idle PC capacity and resells it to users with complex parallel computing tasks.
http://www.datasynapse.com/
DataSynapse is a leading provider of grid computing solutions for the virtual enterprise. GridServer, our flagship software product, creates a highly scalable application architecture that enables the broadest set of enterprise applications to run on a shared computing infrastructure.
News:
  • FIX Organization Tests Fast Protocol
  • BNP Paribas Deploys Grid for Derivatives: BNP Paribas' global structured credit group has implemented DataSynapse's grid computing infrastructure software, GridServer. The grid has been deployed at BNP's London operations and will be rolled out to additional sites in New York and Tokyo, as well as across multiple business lines.
  • Convergence of Grid, Web Services and SOA: Eight Reasons GridServer Can Accelerate the Shift to SOA
  • Software Architectures Will Evolve From SOA and Events to Service Virtualization
  • Grid Computing for Insurance: Joint Solution Overview: DataSynapse and Milliman
  • Business Use Cases: Grid Computing in the Financial Services and Insurance Sectors (September 6, 2005)

