From 23712d26436fd1b56c1a50eecf15272128cc9b35 Mon Sep 17 00:00:00 2001
From: JP Appel
Date: Mon, 6 May 2024 18:07:07 -0400
Subject: complex fractals structure
---
 static/projects/complex-fractals/analysis.html | 5223 ++++++++++++++++++++++++
 1 file changed, 5223 insertions(+)
 create mode 100644 static/projects/complex-fractals/analysis.html

(limited to 'static/projects/complex-fractals/analysis.html')

diff --git a/static/projects/complex-fractals/analysis.html b/static/projects/complex-fractals/analysis.html
new file mode 100644
index 0000000..b9cd813
--- /dev/null
+++ b/static/projects/complex-fractals/analysis.html
@@ -0,0 +1,5223 @@

HPC Final Project Analysis
+

Hypotheses

+
  1. Computing complex fractals parallelizes very well.
  2. The runtime has a linear relation with the number of samples.
+
+
+

Data

+

Each data point is an average of 5 runtimes collected from running the program. To recreate our data, compile the programs using the instructions in README.md, then run gather_data in the analysis directory from the project root. After the SLURM jobs finish, run analysis/collate_data to collect the data into a single file, analysis/data.csv. Note that the SLURM scripts are specific to the MU Cluster and will need to be tweaked to work on other systems.

+

For a raw CSV view of the data, see data.csv. We collected data for the following fractals:

+
  • Mandelbrot
  • Burning Ship
  • Tricorn
  • Multibrot
  • Multicorn
  • Julia
+

We use a maximum iteration count of 25. We collect 3 data points per serial experiment and 5 data points per shared and CUDA experiment. For a more detailed view of our collection methods, see the scripts in analysis/scripts. The runtimes for CUDA experiments include the time to copy the data to and from the GPU, for reasons that will become apparent later. Note that the block size is set to \(16 \times 16\) for all CUDA experiments, since we found a maximum percentage difference of \(0.2\%\) between \(16 \times 16\) and other block sizes that yield the maximum number of warps.

+
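Below is a minimal sketch of how the collated data can be loaded for the analysis that follows. It assumes analysis/data.csv carries the fractal, program, threads, samples, and runtime columns used by the plotting code later in this document.

library(readr)
library(dplyr)

# Load the runtimes collated by analysis/collate_data (path assumed from the text above).
full_data <- read_csv("analysis/data.csv")

# Median runtime per configuration, mirroring the stat_summary() calls used in the plots.
median_runtimes <- full_data %>%
  group_by(fractal, program, threads, samples) %>%
  summarise(median_runtime = median(runtime), .groups = "drop")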
+ +
+
+
+

Runtimes

+
+

Sampling’s Effect on Runtimes

+

The following plots are interactive, allowing you to zoom in on the data. By clicking on the legend you can hide or show data for each fractal. Each plot has a line representing the median runtime for each fractal and program.

+
+

Serial, Shared, CUDA

+

For larger sample sizes we see that the GPU, even with the memory copy overhead, is far faster than the CPU implementations. Without the memory copy overhead the CUDA runtimes are barely measurable. We expect this result, since each value in the fractal is computed independently, which is exactly the type of task the GPU is designed for.

+
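To quantify that gap, here is a rough sketch (not part of the original analysis) that computes the median speedup of the CUDA implementation, including memory copies, over the serial one at each sample size; the speedups table and its speedup column are introduced purely for illustration.

library(dplyr)
library(tidyr)

# Median CUDA (with copies) vs. serial runtimes, then their ratio; a sketch only.
speedups <- full_data %>%
  filter(program %in% c("serial", "cuda")) %>%
  group_by(fractal, samples, program) %>%
  summarise(median_runtime = median(runtime), .groups = "drop") %>%
  pivot_wider(names_from = program, values_from = median_runtime) %>%
  mutate(speedup = serial / cuda) %>%
  arrange(fractal, samples)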

Observe that the fractals that require complex exponentiation, such as the Multibrot and Multicorn, have the longest runtimes. We expect this, since complex exponentiation is the most computationally expensive part of the fractal generation.

+
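As a rough CPU-side illustration of why this is expensive (plain R arithmetic, not the project's implementation): one escape-time step for the Mandelbrot family needs only a complex multiply and add, while a Multibrot/Multicorn step needs a general complex power, which is typically evaluated through a complex logarithm and exponential. The exponent 3.5 below is an arbitrary example.

# Rough illustration only: compare a squaring step with a general-power step
# over one million complex points.
z  <- complex(real = runif(1e6, -2, 2), imaginary = runif(1e6, -2, 2))
c0 <- complex(real = -0.4, imaginary = 0.6)

system.time(for (i in 1:20) zs <- z * z + c0)    # Mandelbrot-style step
system.time(for (i in 1:20) zp <- z^3.5 + c0)    # Multibrot-style step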

Also take note of how linear the runtimes are with respect to the number of samples. We explore this further in the linear models section.

+
full_plot <- full_data %>%
+  ggplot(aes(x = samples, y = runtime, color = program,
+             shape = factor(threads))) +
+  geom_point() +
+  facet_wrap(~fractal, scales = "free_y") +
+  stat_summary(fun = median, geom = "line",
+               aes(group = interaction(program, threads))) +
+  labs(title = "Sampling's Effect on Runtimes",
+       x = "Samples",
+       y = "Runtime (s)",
+       shape = "Threads",
+       color = "Implementation")
+
+ggplotly(full_plot)
+
+ +
+
+

Serial

+
serial_plot <- full_data %>%
+  filter(program == "serial") %>%
+  ggplot(aes(x = samples, y = runtime, color = fractal)) +
+  geom_point() +
+  stat_summary(fun = median, geom = "line",
+               aes(group = fractal)) +
+  labs(title = "Serial Sampling's Effect on Runtimes",
+       x = "Samples",
+       y = "Runtime (s)",
+       color = "Fractal")
+
+ggplotly(serial_plot)
+
+ +
+
+

Shared

+
shared_plot <- full_data %>%
+  filter(program == "shared") %>%
+  rename(Threads = threads) %>%
+  ggplot(aes(x = samples, y = runtime, color = fractal)) +
+  geom_point() +
+  stat_summary(fun = median, geom = "line",
+               aes(group = fractal)) +
+  facet_wrap(~Threads, scales = "free_y", labeller = label_both) +
+  labs(title = "Shared Sampling's Effect on Runtimes",
+       x = "Samples",
+       y = "Runtime (s)",
+       color = "Fractal")
+
+ggplotly(shared_plot)
+
+ +
+
+

CUDA

+
cuda_plot <- full_data %>%
+  filter(program == "cuda") %>%
+  ggplot(aes(x = samples, y = runtime, color = fractal)) +
+  geom_point() +
+  stat_summary(fun = median, geom = "line",
+               aes(group = fractal)) +
+  labs(title = "CUDA Sampling's Effect on Runtimes",
+       x = "Samples",
+       y = "Runtime (s)",
+       color = "Fractal")
+
+ggplotly(cuda_plot)
+
+ +
+
+
+

Linear Models

+

We fit linear models to the data to test our hypothesis that the runtime has a linear relationship with the number of samples. By doing so we can give, with high confidence, models for predicting the runtime of a fractal given the number of samples. In all the models below, the \(R^2\) value is very close to 1, indicating that the model fits the data very well.

+
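The following sketch shows one way per-fractal fits like those in the table below could be produced from full_data; the actual tables may have been generated with a dedicated model-summary tool, so treat this only as an outline.

library(dplyr)

# Fit runtime ~ samples separately for each fractal, serial runs only.
serial_fits <- full_data %>%
  filter(program == "serial") %>%
  split(.$fractal) %>%
  lapply(function(d) lm(runtime ~ samples, data = d))

# Intercept, slope, and R^2 per fractal.
t(sapply(serial_fits, function(m) c(coef(m), r.squared = summary(m)$r.squared)))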
+

Serial

+
              Mandelbrot   Burning Ship   Tricorn    Multibrot   Multicorn   Julia
(Intercept)   0.010        0.016          0.007      0.069       0.040       0.014
              (0.003)      (0.005)        (0.002)    (0.027)     (0.040)     (0.005)
samples       0.000        0.000          0.000      0.000       0.000       0.000
              (0.000)      (0.000)        (0.000)    (0.000)     (0.000)     (0.000)
Num.Obs.      96           96             96         96          96          96
R2            1.000        1.000          1.000      1.000       1.000       1.000
R2 Adj.       1.000        1.000          1.000      1.000       1.000       1.000
AIC           −408.1       −339.1         −524.1     −1.6        74.8        −340.5
BIC           −400.5       −331.4         −516.4     6.0         82.5        −332.8
Log.Lik.      207.072      172.560        265.031    3.824       −34.384     173.257
RMSE          0.03         0.04           0.02       0.23        0.35        0.04
+
+
+

Shared

+

For the sake of brevity we only show models that include all threads. Separating the models by thread count provided much better fits.

+
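A sketch of the thread-separated fits mentioned above (same assumed columns; one model per fractal and thread count):

library(dplyr)

shared_fits <- full_data %>%
  filter(program == "shared") %>%
  split(list(.$fractal, .$threads), drop = TRUE) %>%
  lapply(function(d) lm(runtime ~ samples, data = d))

# R^2 for each fractal/thread-count pair.
sapply(shared_fits, function(m) summary(m)$r.squared)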
              Mandelbrot   Burning Ship   Tricorn     Multibrot    Multicorn    Julia
(Intercept)   −0.002       −0.004         −0.002      0.013        0.008        0.005
              (0.041)      (0.051)        (0.032)     (0.400)      (0.354)      (0.050)
samples       0.000        0.000          0.000       0.000        0.000        0.000
              (0.000)      (0.000)        (0.000)     (0.000)      (0.000)      (0.000)
Num.Obs.      640          640            640         640          640          640
R2            0.907        0.888          0.922       0.798        0.798        0.879
R2 Adj.       0.907        0.888          0.922       0.798        0.798        0.879
AIC           1723.4       2004.2         1425.0      4637.4       4480.7       1983.5
BIC           1736.8       2017.6         1438.4      4650.8       4494.1       1996.9
Log.Lik.      −858.689     −999.083       −709.490    −2315.722    −2237.340    −988.764
RMSE          0.93         1.15           0.73        9.02         7.98         1.13
+
+
+

CUDA

+
              Mandelbrot   Burning Ship   Tricorn    Multibrot   Multicorn   Julia
(Intercept)   0.182        0.182          0.182      0.184       0.185       0.182
              (0.000)      (0.000)        (0.000)    (0.000)     (0.000)     (0.000)
samples       0.000        0.000          0.000      0.000       0.000       0.000
              (0.000)      (0.000)        (0.000)    (0.000)     (0.000)     (0.000)
Num.Obs.      160          160            160        160         160         160
R2            1.000        0.999          1.000      1.000       1.000       1.000
R2 Adj.       1.000        0.999          1.000      1.000       1.000       1.000
AIC           −1765.1      −1668.1        −1786.7    −1370.7     −1348.4     −1764.1
BIC           −1755.9      −1658.8        −1777.4    −1361.5     −1339.2     −1754.9
Log.Lik.      885.550      837.037        896.334    688.363     677.197     885.047
RMSE          0.00         0.00           0.00       0.00        0.00        0.00
+
+
+
+
+

Parallelism

+

In the plots below we show the percentage of parallelism according to Amdahl's Law as a function of samples. We truncated the data to only show percentage parallelism greater than −75%. As we expected, the CUDA implementation is highly parallel, while the shared implementations have varying degrees of parallelism. For each thread count, the shared implementations tend to level off at a certain percentage of parallelism. An oddity in the data is that the shared Multibrot and Multicorn implementations have a negative percentage of parallelism. We believe this stems from their use of complex exponentiation, which is their only major difference from the other fractals. From this, we conclude that there is some resource that each thread must share in Multibrot and Multicorn that is not shared in the other fractals. Most likely this resource is a specialized arithmetic unit.

+
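For reference, here is a sketch of how such a percentage can be derived, assuming the speedup \(S\) is measured against the serial runtime and \(N\) is the number of threads (or parallel workers). Solving Amdahl's Law for the parallel fraction \(p\) gives

\[
S(N) = \frac{T_{\text{serial}}}{T_N} = \frac{1}{(1 - p) + \frac{p}{N}}
\quad\Longrightarrow\quad
p = \frac{\frac{1}{S} - 1}{\frac{1}{N} - 1},
\]

with the percentage of parallelism being \(100\,p\). Note that \(p\) is negative whenever \(S < 1\), that is, whenever the parallel run is slower than the serial one, which is consistent with the negative percentages observed for the shared Multibrot and Multicorn.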

On the GPU we see a similar issue: the register count per thread increases from about 32 to 48 when computing the Multibrot and Multicorn fractals.

+

(Thanks to Yousuf for improving the legibility of these graphs)

+
parallel_plot <- parallel %>%
+  ggplot(aes(x = samples, y = percentage_parallel,
+             color = interaction(program,threads))) +
+  geom_point() +
+  stat_summary(fun = median, geom = "line",
+               aes(group = interaction(program, fractal, threads))) +
+  facet_wrap(~fractal, scales = "free_y") +
+  labs(title = "Program/Fractal Parallelism",
+       x = "Samples",
+       y = "Percentage Parallel",
+       color = "Implementation/Threads")
+
+ggplotly(parallel_plot)
+
+ +
+
+

Conclusions

+

From this data we can conclude that the runtime of fractal generation is linear with respect to the number of samples. We also conclude that the CUDA implementation is highly parallel, with the shared implementations having varying degrees of parallelism.

+
-- cgit v1.2.3