The background to this is that I have been working on improving the convergence of an optimisation package that uses genetic algorithms. Having a good distribution of "genes", or decisions, within the initial population is key to allowing a GA to explore the decision space effectively. The ga() function within the GA package in R allows for a suggestions argument, which takes a matrix of decision values and places them within the starting population.

Initially I wrote one function to create evenly spaced sequences for every decision between the lower and upper bound of allowable values, from which I enumerated all possible combinations. I then wrote another function to take a model object and a user-defined population size and automatically work out the nearest population count that allowed for even sampling over the model's decision bounds.

For models with large numbers of decisions the potential number of combinations is enormous. This is true even if you only select the upper and lower bound per decision. Combinatorial algorithms show the worst kind of growth in complexity (\(O(n!)\)). Already with 30 independent decisions, each with two possible values, the number of combinations is 2^30 = 1,073,741,824, ten times more than the number of stars in the average galaxy. I needed a way of randomly sampling from the decisions, but in a way that ensured the starting population had lots of diversity.

Latin hypercube sampling is a sampling technique for generating input vectors into computer models, originally developed for sensitivity analysis studies. Latin hypercube sampling (LHS) was inspired by the concept of the Latin square from combinatorial mathematics, where an n-by-n matrix is filled with n different objects (numbers, characters, symbols, etc.) such that each object occurs exactly once in each row and exactly once in each column (see Fig.). In addition to providing a cost-effective and reliable sampling scheme, LHS also gives the user the flexibility to study the input space efficiently.

Let's assume we have three decisions, where:

- decision a can be 1, 2 or 3,
- decision b can be TRUE or FALSE,
- decision c can be red, green, blue or black.

There are \(3 * 2 * 4 = 24\) possible unique combinations of these decisions (if a can only take on integer values). An individual in the starting population of a genetic algorithm would be of the form 1_TRUE_red, for instance. Let's assume we would like only 12 individuals from the 24 potential unique combinations, but we still want good representation of all/most possible decisions.

Using the lhs library from R, we first create 12 random uniform values between 0 and 1 for each of the three decisions, a, b and c. (This is an implementation of Deutsch and Deutsch, "Latin hypercube sampling with multidimensional uniformity", Journal of Statistical Planning and Inference 142 (2012), 763-772.)

library(lhs)

For convenience we can bring this into a single function like so.
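The single function referred to above is not shown in the scraped post, so the following is a hedged reconstruction rather than the original code: it assumes lhs::randomLHS() draws the unit hypercube sample, and each column is then cut into equal-width bins, one bin per decision level. The function and variable names (lhs_population, decisions) are illustrative.

```r
library(lhs)

# Hedged sketch, not the post's original code: draw an n x k Latin
# hypercube on (0, 1), then map each column onto the discrete levels
# of the corresponding decision.
lhs_population <- function(n, decisions) {
  unit <- lhs::randomLHS(n, length(decisions))   # n rows, one column per decision
  out <- lapply(seq_along(decisions), function(j) {
    levels <- decisions[[j]]
    # cut (0, 1) into equal-width bins, one bin per level
    idx <- pmax(1L, ceiling(unit[, j] * length(levels)))
    levels[idx]
  })
  names(out) <- names(decisions)
  as.data.frame(out, stringsAsFactors = FALSE)
}

decisions <- list(
  a = 1:3,
  b = c(TRUE, FALSE),
  c = c("red", "green", "blue", "black")
)

pop <- lhs_population(12, decisions)
```

Because randomLHS() places exactly one point in each of the 12 strata per column, mapping 12 points into equal bins gives each value of a exactly 4 times, each value of b exactly 6 times, and each colour exactly 3 times: the even coverage of decisions we were after.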
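To pass such a sample to the GA, the decisions must be encoded numerically for the suggestions matrix that ga() accepts. The encoding and the toy fitness function below are illustrative assumptions, not the post's actual model: b is coded 0/1 and c as its index among the four colours.

```r
library(GA)

# A 12 x 3 sampled population (a stand-in; in the post this would come
# from the Latin hypercube sampling step).
pop <- data.frame(
  a = rep(1:3, each = 4),
  b = rep(c(TRUE, FALSE), 6),
  c = rep(c("red", "green", "blue", "black"), 3),
  stringsAsFactors = FALSE
)

# Encode each decision as a number so it fits in a numeric matrix.
suggestions <- cbind(
  a = pop$a,                                          # already numeric
  b = as.numeric(pop$b),                              # TRUE/FALSE -> 1/0
  c = match(pop$c, c("red", "green", "blue", "black")) # colour -> index 1..4
)

# Toy fitness for illustration only; ga() maximises its fitness function.
result <- ga(
  type        = "real-valued",
  fitness     = function(x) -sum(x),
  lower       = c(1, 0, 1),
  upper       = c(3, 1, 4),
  suggestions = suggestions,
  popSize     = 12,
  maxiter     = 5,
  monitor     = FALSE
)
```

The suggestions rows are placed into the starting population, so the GA begins its search from the diverse, evenly spread sample rather than from purely random individuals.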
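For contrast with the LHS approach, the full-enumeration strategy mentioned earlier (the post's own helper is not shown) can be sketched in base R with expand.grid(); the names here are assumptions for illustration.

```r
# Illustrative full enumeration of every decision combination.
decisions <- list(
  a = 1:3,
  b = c(TRUE, FALSE),
  c = c("red", "green", "blue", "black")
)

all_combos <- expand.grid(decisions, stringsAsFactors = FALSE)
nrow(all_combos)  # 3 * 2 * 4 = 24

# Individuals in the 1_TRUE_red naming scheme:
ids <- do.call(paste, c(all_combos, sep = "_"))
head(ids, 3)  # "1_TRUE_red" "2_TRUE_red" "3_TRUE_red"
```

This is fine at 24 combinations, but as noted above it blows up combinatorially: with 30 two-valued decisions expand.grid() would need over a billion rows, which is exactly why a 12-row Latin hypercube sample is attractive.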