changeset 0:8e515e141b70
Initial revision (based on the clustering talk slides)
author:   Josef "Jeff" Sipek <jeffpc@optonline.net>
date:     Fri, 26 Aug 2005 17:17:22 -0500
parents:
children: d7c6a14e17c8
files:    Makefile slideshow.tex
diffstat: 2 files changed, 216 insertions(+), 0 deletions(-)
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/Makefile	Fri Aug 26 17:17:22 2005 -0500
@@ -0,0 +1,2 @@
+all:
+	latex slideshow.tex && dvipdf slideshow.dvi
--- /dev/null	Thu Jan 01 00:00:00 1970 +0000
+++ b/slideshow.tex	Fri Aug 26 17:17:22 2005 -0500
@@ -0,0 +1,214 @@
+\documentclass[pdf,contemporain,slideColor,colorBG,accumulate,nototal]{prosper}
+
+%\usepackage{macros-cp}
+
+\title{Linux Clustering}
+\subtitle{\normalsize
+	{\bf Q}: What is better than a PC running Linux?\\
+	{\bf A}: More than one PC running Linux, clustered together!}
+\author{Josef ``Jeff'' Sipek}
+\institution{}
+
+\begin{document}
+\maketitle
+
+% What the..?
+\overlays{5}{
+\begin{slide}{Wait a second...}
+% What is this "cluster"?
+What are you talking about?!
+
+\onlySlide*{2}{
+\vspace{3em}
+% Collection of similar items
+Collection
+}
+\onlySlide*{3}{
+\vspace{3em}
+% We are talking about computers
+Collection of computers
+}
+\onlySlide*{4}{
+\vspace{3em}
+% In a cluster, each computer is called a node
+Collection of nodes
+}
+\onlySlide*{5}{
+\vspace{3em}
+% We like Linux
+Collection of nodes running Linux
+}
+% hence the subtitle
+\end{slide}}
+
+% Types
+\overlays{5}{
+\begin{slide}{Types of Clustering}
+\small
+
+\begin{raggedright}
+There are several types of clusters...
+
+\fromSlide{2}{
+	\vspace{2em}
+% google's cluster is a great example
+	{\bf High availability} - fail-over situation
+}
+
+\fromSlide{3}{
+	\vspace{2em}
+% grid computing, openMosix, etc.
+	{\bf High throughput} - implies high availability, loosely coupled
+}
+
+\fromSlide{4}{
+	\vspace{2em}
+% Beowulf
+% - dedicated LAN
+	{\bf High performance} - tightly coupled, very faaaaast
+}
+
+\fromSlide{5}{
+	\hspace{3em}{\bf This is the cool stuff!}
+}
+\end{raggedright}
+\end{slide}}
+
+% NUMA
+\overlays{3}{
+\begin{slide}{Non-Uniform Memory Access}
+% Altix 3000 uses special hardware to connect the nodes into one "computer."
+If you have quite a bit of money to spend, you might want to consider one of these with:
+\begin{itemstep}
+% yes, that's GigaBytes
+\item 6.4 GB/s interconnect
+% quite a bit of memory
+\item 4GB - 8TB of RAM
+% what good is hardware, when there is no software?
+\item Linux
+% you need at least 4 zeros in the price tag, more likely 5
+\item Damn expensive!
+\end{itemstep}
+\end{slide}}
+
+% Beowulf
+\overlays{3}{
+\begin{slide}{Beowulf}
+%\epsfig{file=./beowulf.eps}
+% Are you too poor to spend a quarter million on a computer? Don't worry!
+If you don't have that much money, try the Beowulf cluster type.
+\begin{itemstep}
+% you can use hardware you already have at home! (point out the NYC court computers)
+\item Use any old hardware
+% OK, maybe you think that you should spend some money
+\item Use any new hardware
+% And here we go again, need software? Get Linux!
+\item Linux
+\end{itemstep}
+\end{slide}}
+
+% MPI
+\overlays{4}{
+\begin{slide}{Message Passing Interface a.k.a. MPI}
+\begin{itemstep}
+% MPI is a standard
+\item A standard
+% MPI was designed for high performance on both massively parallel machines and on workstation clusters.
+\item Designed for high performance
+% MPI is widely available, with both freely available and vendor-supplied implementations. A number of MPI home pages are available.
+\item Widely available
+% for what it's worth, MPI has 130 functions
+\item 130 functions
+\end{itemstep}
+\end{slide}}
+
+% PVM
+\overlays{1}{
+\begin{slide}{Parallel Virtual Machine a.k.a. PVM}
+% I won't have enough time to talk about and demo PVM...oh well
+\begin{itemstep}
+% PVM does a similar thing as MPI, but a little differently
+% I don't have much to say about it except that it has 38 functions, and works well for many people. RTFM
+\item 38 functions
+\end{itemstep}
+\end{slide}}
+
+% OM
+\overlays{3}{
+\begin{slide}{OpenMosix}
+% openMosix is a kernel extension for SSI clustering.
+\begin{itemstep}
+\item Single System Image clustering
+% spawn a process anywhere, and it will migrate to the best node available => load balancing
+\item Automatic process migration
+%
+\item Easy to set up
+% not in my opinion
+\end{itemstep}
+\end{slide}}
+
+% distcc
+
+% HW: uncluttering of cables
+\overlays{3}{
+\begin{slide}{What a mess!}
+% Now that you have spent thousands of dollars on computers and miles of cables, what can you do to make it neater?
+What am I supposed to do?
+\begin{itemstep}
+% you won't like this, but it works
+\item Don't look
+% organize the cables, have networking cables run in "channels"
+\item Organize
+% you might want to write everything down, since a cluster can get large and you usually set it up and leave it alone for a looong time
+\item Document
+\end{itemstep}
+\end{slide}}
+
+% RTFM
+\begin{slide}{RTFM}
+There are MANY HOWTOs and manuals about clustering...Read!
+\end{slide}
+
+% Let's get dirty
+\overlays{8}{
+\begin{slide}{Let's get dirty..}
+\begin{itemstep}
+% Debian is a good choice, this is very simple
+\item Install Linux
+% really any DNS server will do, but BIND is known to be good
+\item Install bind9
+% DHCP gives the diskless nodes IPs and info on how to boot
+\item Install dhcpd
+% tftp gives the nodes a way to get a copy of the kernel
+\item Install tftpd
+% mknbi (Debian) allows you to create the Etherboot kernel images
+\item Install mknbi
+% NFS for root on NFS
+\item Install NFS
+% you need to create a root structure for each node
+\item Make / for nodes
+% if you don't have a network-bootable box, make an Etherboot floppy/CD
+\item Make boot floppy (optional)
+\end{itemstep}
+\end{slide}}
+
+\begin{slide}{Installing Linux}
+\vspace{3.5em}
+\begin{center}
+Duh!
+\end{center}
+\end{slide}
+
+\overlays{2}{
+\begin{slide}{Installing Bind9, dhcpd, etc...}
+This takes some effort, but is outside the scope of this talk, except...
+\begin{itemstep}
+\item Set nfs-root in dhcpd.conf
+\item
+\end{itemstep}
+\end{slide}}
+
+% FIXME: use "clusterfobia"
+
+
+\end{document}
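The last slide above boils down to pointing each diskless node at a kernel image and an NFS root. A minimal sketch of what that can look like in ISC dhcpd syntax is below; the subnet, MAC address, hostnames, and paths are all made up for illustration, and the exact option names you need may vary with your dhcpd version and boot loader.

```
# Hypothetical dhcpd.conf fragment for netbooting one diskless node
# with its root filesystem on NFS. All addresses/paths are examples.
subnet 192.168.1.0 netmask 255.255.255.0 {
    range 192.168.1.100 192.168.1.199;

    host node1 {
        hardware ethernet 00:11:22:33:44:55;
        fixed-address 192.168.1.101;
        # kernel image served over TFTP (e.g. built with mknbi)
        filename "/tftpboot/vmlinuz-node1.nb";
        # where the node mounts its / over NFS
        option root-path "/export/nodes/node1";
        # TFTP server to fetch the kernel from
        next-server 192.168.1.1;
    }
}
```

The per-host block is what lets each node get its own root directory under /export/nodes, matching the "Make / for nodes" step on the earlier slide.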
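The MPI slide above describes message passing only in bullet points. A real MPI program would use MPI_Init/MPI_Send/MPI_Recv from <mpi.h> and needs an MPI implementation installed, so as a self-contained toy illustration of the idea (one process sends a message, another receives it), here is a sketch using a plain POSIX pipe instead. This is not MPI, just the concept in miniature.

```c
/* Toy illustration of message passing between two processes using a
 * POSIX pipe. NOT MPI -- a real MPI program would use MPI_Send and
 * MPI_Recv -- but the shape is the same: a worker sends a result,
 * the master receives it. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
	int fd[2];
	char buf[64];

	if (pipe(fd) == -1) {
		perror("pipe");
		return 1;
	}

	pid_t pid = fork();
	if (pid == -1) {
		perror("fork");
		return 1;
	}

	if (pid == 0) {
		/* "worker" process: send a message to the master */
		const char *msg = "result from worker";
		close(fd[0]);
		write(fd[1], msg, strlen(msg) + 1);
		close(fd[1]);
		return 0;
	}

	/* "master" process: receive the worker's message */
	close(fd[1]);
	read(fd[0], buf, sizeof(buf));
	close(fd[0]);
	wait(NULL);

	printf("master received: %s\n", buf);
	return 0;
}
```

In MPI the pipe would be replaced by the library's transport (shared memory, Ethernet, or a fast interconnect), and the same program would run across all nodes of the cluster.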