Algorithms and Parallel Computing


Out of Stock


Premium quality
Bookswagon upholds quality by delivering untarnished books. Quality, service and satisfaction are everything to us!
Easy return
Not satisfied with this product? Keep it in its original condition and packaging to use our easy return policy.
Certified product
First impressions last! Check the book's certification page, ISBN, publisher's name, copyright page and print quality.
Secure checkout
Security at its finest! Logging in, browsing, purchasing and paying, every step is safe and secure.
Money-back guarantee
It's all about customers! For any kind of bad experience with the product, get your full amount back after returning it.
On-time delivery
At your doorstep on time! Get this book delivered without any delay.

About the Book

There is a software gap between hardware potential and the performance that can be attained with today's parallel program development tools: the tools still require manual intervention by the programmer to parallelize the code. Programming a parallel computer requires a closer study of the target algorithm or application than traditional sequential programming demands. The programmer must be aware of the communication and data dependencies of the algorithm or application. This book provides techniques for exploring the possible ways to program a parallel computer for a given application.
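The book's concurrency-platform chapters cover Cilk++, OpenMP and CUDA, and its early chapters distinguish independent from dependent loops. As a rough sketch of the kind of manual intervention described above (this example is not from the book; the function name, array size and values are assumptions), an OpenMP annotation in C lets the programmer assert that a loop's iterations are independent:

#include <stdio.h>

#define N 1000000

/* Each iteration writes only y[i] and reads only x[i]: there is no
 * loop-carried dependence, so the iterations can run in any order.
 * The pragma is the programmer's manual assertion of that fact; when
 * OpenMP is not enabled it is ignored and the loop runs serially. */
static void scale_array(const double *x, double *y, double a, int n)
{
    #pragma omp parallel for
    for (int i = 0; i < n; i++)
        y[i] = a * x[i];
}

/* By contrast, a dependent loop such as y[i] = y[i-1] + x[i] carries a
 * dependence between consecutive iterations and cannot be parallelized
 * this way without restructuring (loop spreading, partitioning, etc.). */

int main(void)
{
    static double x[N], y[N];

    for (int i = 0; i < N; i++)
        x[i] = (double)i;

    scale_array(x, y, 2.0, N);
    printf("y[%d] = %f\n", N - 1, y[N - 1]);
    return 0;
}

Note that it is the programmer, not the tool, who has verified that no iteration reads a value another iteration writes; deciding when such an assertion is valid, and how to restructure a loop when it is not, is the sort of question the book's dependence analysis addresses.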

Table of Contents:
Preface.
List of Acronyms.
1 Introduction. 1.1 Introduction. 1.2 Toward Automating Parallel Programming. 1.3 Algorithms. 1.4 Parallel Computing Design Considerations. 1.5 Parallel Algorithms and Parallel Architectures. 1.6 Relating Parallel Algorithm and Parallel Architecture. 1.7 Implementation of Algorithms: A Two-Sided Problem. 1.8 Measuring Benefits of Parallel Computing. 1.9 Amdahl's Law for Multiprocessor Systems. 1.10 Gustafson-Barsis's Law. 1.11 Applications of Parallel Computing.
2 Enhancing Uniprocessor Performance. 2.1 Introduction. 2.2 Increasing Processor Clock Frequency. 2.3 Parallelizing ALU Structure. 2.4 Using Memory Hierarchy. 2.5 Pipelining. 2.6 Very Long Instruction Word (VLIW) Processors. 2.7 Instruction-Level Parallelism (ILP) and Superscalar Processors. 2.8 Multithreaded Processor.
3 Parallel Computers. 3.1 Introduction. 3.2 Parallel Computing. 3.3 Shared-Memory Multiprocessors (Uniform Memory Access [UMA]). 3.4 Distributed-Memory Multiprocessor (Nonuniform Memory Access [NUMA]). 3.5 SIMD Processors. 3.6 Systolic Processors. 3.7 Cluster Computing. 3.8 Grid (Cloud) Computing. 3.9 Multicore Systems. 3.10 SM. 3.11 Communication Between Parallel Processors. 3.12 Summary of Parallel Architectures.
4 Shared-Memory Multiprocessors. 4.1 Introduction. 4.2 Cache Coherence and Memory Consistency. 4.3 Synchronization and Mutual Exclusion.
5 Interconnection Networks. 5.1 Introduction. 5.2 Classification of Interconnection Networks by Logical Topologies. 5.3 Interconnection Network Switch Architecture.
6 Concurrency Platforms. 6.1 Introduction. 6.2 Concurrency Platforms. 6.3 Cilk++. 6.4 OpenMP. 6.5 Compute Unified Device Architecture (CUDA).
7 Ad Hoc Techniques for Parallel Algorithms. 7.1 Introduction. 7.2 Defining Algorithm Variables. 7.3 Independent Loop Scheduling. 7.4 Dependent Loops. 7.5 Loop Spreading for Simple Dependent Loops. 7.6 Loop Unrolling. 7.7 Problem Partitioning. 7.8 Divide-and-Conquer (Recursive Partitioning) Strategies. 7.9 Pipelining.
8 Nonserial-Parallel Algorithms. 8.1 Introduction. 8.2 Comparing DAG and DCG Algorithms. 8.3 Parallelizing NSPA Algorithms Represented by a DAG. 8.4 Formal Technique for Analyzing NSPAs. 8.5 Detecting Cycles in the Algorithm. 8.6 Extracting Serial and Parallel Algorithm Performance Parameters. 8.7 Useful Theorems. 8.8 Performance of Serial and Parallel Algorithms on Parallel Computers.
9 z-Transform Analysis. 9.1 Introduction. 9.2 Definition of the z-Transform. 9.3 The 1-D FIR Digital Filter Algorithm. 9.4 Software and Hardware Implementations of the z-Transform. 9.5 Design 1: Using Horner's Rule for Broadcast Input and Pipelined Output. 9.6 Design 2: Pipelined Input and Broadcast Output. 9.7 Design 3: Pipelined Input and Output.
10 Dependence Graph Analysis. 10.1 Introduction. 10.2 The 1-D FIR Digital Filter Algorithm. 10.3 The Dependence Graph of an Algorithm. 10.4 Deriving the Dependence Graph for an Algorithm. 10.5 The Scheduling Function for the 1-D FIR Filter. 10.6 Node Projection Operation. 10.7 Nonlinear Projection Operation. 10.8 Software and Hardware Implementations of the DAG Technique.
11 Computational Geometry Analysis. 11.1 Introduction. 11.2 Matrix Multiplication Algorithm. 11.3 The 3-D Dependence Graph and Computation Domain D. 11.4 The Facets and Vertices of D. 11.5 The Dependence Matrices of the Algorithm Variables. 11.6 Nullspace of Dependence Matrix: The Broadcast Subdomain B. 11.7 Design Space Exploration: Choice of Broadcasting versus Pipelining Variables. 11.8 Data Scheduling. 11.9 Projection Operation Using the Linear Projection Operator. 11.10 Effect of Projection Operation on Data. 11.11 The Resulting Multithreaded/Multiprocessor Architecture. 11.12 Summary of Work Done in this Chapter.
12 Case Study: One-Dimensional IIR Digital Filters. 12.1 Introduction. 12.2 The 1-D IIR Digital Filter Algorithm. 12.3 The IIR Filter Dependence Graph. 12.4 z-Domain Analysis of 1-D IIR Digital Filter Algorithm.
13 Case Study: Two- and Three-Dimensional Digital Filters. 13.1 Introduction. 13.2 Line and Frame Wraparound Problems. 13.3 2-D Recursive Filters. 13.4 3-D Digital Filters.
14 Case Study: Multirate Decimators and Interpolators. 14.1 Introduction. 14.2 Decimator Structures. 14.3 Decimator Dependence Graph. 14.4 Decimator Scheduling. 14.5 Decimator DAG for s1 = [1 0]. 14.6 Decimator DAG for s2 = [1 -1]. 14.7 Decimator DAG for s3 = [1 1]. 14.8 Polyphase Decimator Implementations. 14.9 Interpolator Structures. 14.10 Interpolator Dependence Graph. 14.11 Interpolator Scheduling. 14.12 Interpolator DAG for s1 = [1 0]. 14.13 Interpolator DAG for s2 = [1 -1]. 14.14 Interpolator DAG for s3 = [1 1]. 14.15 Polyphase Interpolator Implementations.
15 Case Study: Pattern Matching. 15.1 Introduction. 15.2 Expressing the Algorithm as a Regular Iterative Algorithm (RIA). 15.3 Obtaining the Algorithm Dependence Graph. 15.4 Data Scheduling. 15.5 DAG Node Projection. 15.6 Design 1: Design Space Exploration When s = [1 1]^t. 15.7 Design 2: Design Space Exploration When s = [1 -1]^t. 15.8 Design 3: Design Space Exploration When s = [1 0]^t.
16 Case Study: Motion Estimation for Video Compression. 16.1 Introduction. 16.2 FBMAs. 16.3 Data Buffering Requirements. 16.4 Formulation of the FBMA. 16.5 Hierarchical Formulation of Motion Estimation. 16.6 Hardware Design of the Hierarchy Blocks.
17 Case Study: Multiplication over GF(2^m). 17.1 Introduction. 17.2 The Multiplication Algorithm in GF(2^m). 17.3 Expressing Field Multiplication as an RIA. 17.4 Field Multiplication Dependence Graph. 17.5 Data Scheduling. 17.6 DAG Node Projection. 17.7 Design 1: Using d1 = [1 0]^t. 17.8 Design 2: Using d2 = [1 1]^t. 17.9 Design 3: Using d3 = [1 -1]^t. 17.10 Applications of Finite Field Multipliers.
18 Case Study: Polynomial Division over GF(2). 18.1 Introduction. 18.2 The Polynomial Division Algorithm. 18.3 The LFSR Dependence Graph. 18.4 Data Scheduling. 18.5 DAG Node Projection. 18.6 Design 1: Design Space Exploration When s1 = [1 -1]. 18.7 Design 2: Design Space Exploration When s2 = [1 0]. 18.8 Design 3: Design Space Exploration When s3 = [1 -0.5]. 18.9 Comparing the Three Designs.
19 The Fast Fourier Transform. 19.1 Introduction. 19.2 Decimation-in-Time FFT. 19.3 Pipeline Radix-2 Decimation-in-Time FFT Processor. 19.4 Decimation-in-Frequency FFT. 19.5 Pipeline Radix-2 Decimation-in-Frequency FFT Processor.
20 Solving Systems of Linear Equations. 20.1 Introduction. 20.2 Special Matrix Structures. 20.3 Forward Substitution (Direct Technique). 20.4 Back Substitution. 20.5 Matrix Triangularization Algorithm. 20.6 Successive Over-Relaxation (SOR) (Iterative Technique). 20.7 Problems.
21 Solving Partial Differential Equations Using Finite Difference Method. 21.1 Introduction. 21.2 FDM for 1-D Systems.
References.
Index.
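Chapters 9 and 10 (and several of the later case studies) use the 1-D FIR digital filter as their running example; its standard definition is y(n) = sum over k from 0 to N-1 of h(k) x(n-k). For orientation only, the short C sketch below (not taken from the book; the names, tap count and sample values are invented) shows the two loops whose dependencies those chapters analyze: the outer loop over output samples carries no dependence, while the inner accumulation over the filter taps is serial unless it is restructured, for example into the broadcast or pipelined designs listed under Chapter 9.

/* Minimal 1-D FIR filter sketch: y[n] = sum_{k=0}^{NTAPS-1} h[k] * x[n-k].
 * Illustrative only; names and sizes are invented for this example. */
#include <stdio.h>

#define NTAPS 4

static void fir_filter(const double *x, double *y, const double *h,
                       int nsamples)
{
    for (int n = 0; n < nsamples; n++) {      /* outputs are independent  */
        double acc = 0.0;
        for (int k = 0; k < NTAPS; k++) {     /* serial accumulation      */
            if (n - k >= 0)
                acc += h[k] * x[n - k];       /* treat x[m] = 0 for m < 0 */
        }
        y[n] = acc;
    }
}

int main(void)
{
    const double h[NTAPS] = {0.25, 0.25, 0.25, 0.25};  /* moving average */
    const double x[8] = {1, 2, 3, 4, 4, 3, 2, 1};
    double y[8];

    fir_filter(x, y, h, 8);
    for (int n = 0; n < 8; n++)
        printf("y[%d] = %.2f\n", n, y[n]);
    return 0;
}

The dependence-graph, scheduling and projection techniques listed in the contents are systematic ways of mapping exactly this kind of nested loop onto multithreaded software or parallel hardware; the sketch only shows the starting point.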


Product Details
  • ISBN-13: 9780470932025
  • Publisher: John Wiley and Sons Ltd
  • Publisher Imprint: Wiley-Blackwell
  • Height: 242 mm
  • No. of Pages: 368
  • Series Title: Wiley Parallel and Distributed Computing
  • Weight: 199 g
  • ISBN-10: 0470932023
  • Publisher Date: 14 Mar 2011
  • Binding: Other digital
  • Language: English
  • Returnable: Yes
  • Spine Width: 23 mm
  • Width: 160 mm

