
Modeling the distribution of sasquatch – the first published study using ENMTools

Lozier, Aniello, and Hickerson just published a paper in the Journal of Biogeography in which they use sasquatch sightings and footprints to model the distribution of this elusive imaginary species. They went one step further and modeled the effects of climate change on sasquatch distributions, showing that our furry friends are only going to become more elusive with time. Finally, they used ENMTools to demonstrate that sasquatch distributions were statistically indistinguishable from those of the black bear, suggesting that many of the bigfoot sightings may have been cases of mistaken identity.

Just to put a punchline on the whole thing, the New Scientist article about the study has drawn a rush of comments claiming that the study is biased due to its a priori assumption that sasquatch isn't real.

The ENMTools web site is getting ready for launch

For anyone out there who read our Evolution paper last year: Rich Glor, Michael Turelli, and I are putting together a web site to host the software we wrote for that study. It's got a bunch of other little bits and bobs in development as well, mostly revolving around different resampling procedures for use with environmental niche modeling. You can find it at www.enmtools.com. I'll post any major developments here as well.

Yes, the site is supposed to look that way.

Datamuncher – a handy tool for niche and distribution modelers

Here's a little tool I whipped together for my own use; I hope it's useful to others as well. It takes .csv files of species occurrences and a batch of ASCII raster files and converts them into three output files:

  1. A community presence/absence matrix, with “community” defined as a grid cell in the ASCII raster files.
  2. A set of coordinates for each grid cell that has at least one species present in it, corresponding to the “communities” above.
  3. A matrix of values from the ASCII raster files for each community.

The general idea is to take data in a format that Maxent accepts (.csv and .asc) and convert it into a set of files that can be used with some R-based analyses. So far it seems to work with GDM (generalized dissimilarity modelling), but I haven't tried anything else. You may need to delete columns 1-3 in the environment file, depending on what you're doing with it.
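For anyone curious what that conversion involves under the hood, here is a rough Python sketch of the same idea. This is not the tool itself: it assumes Maxent-style occurrence files (a header row, then species, longitude, latitude), ESRI ASCII rasters with the standard six-line header that all share one grid, and the function and column names are my own inventions.

```python
import csv

def read_asc(path):
    """Read an ESRI ASCII raster: returns (header dict, grid of values)."""
    with open(path) as f:
        header = {}
        for _ in range(6):  # assumes the standard six-line header
            key, value = f.readline().split()
            header[key.lower()] = float(value)
        grid = [[float(v) for v in line.split()] for line in f if line.strip()]
    return header, grid

def cell_of(lon, lat, h):
    """Map a lon/lat point to a (row, col) grid cell, or None if off-grid."""
    col = int((lon - h["xllcorner"]) / h["cellsize"])
    row = int((h["yllcorner"] + h["nrows"] * h["cellsize"] - lat) / h["cellsize"])
    if 0 <= col < h["ncols"] and 0 <= row < h["nrows"]:
        return row, col
    return None

def munch(occ_files, asc_files, out_dir, name):
    rasters = [read_asc(p) for p in asc_files]
    h = rasters[0][0]                  # all rasters assumed to share one grid
    presence = {}                      # (row, col) -> set of species present
    species = []
    for path in occ_files:
        with open(path) as f:
            reader = csv.reader(f)
            next(reader)               # skip the Maxent-style header row
            for sp, lon, lat in reader:
                if sp not in species:
                    species.append(sp)
                cell = cell_of(float(lon), float(lat), h)
                if cell is not None:
                    presence.setdefault(cell, set()).add(sp)
    cells = sorted(presence)           # only cells with >= 1 species present
    # 1. community presence/absence matrix
    with open(f"{out_dir}/{name}_communities.csv", "w", newline="") as f:
        w = csv.writer(f)
        w.writerow(["row", "col"] + species)
        for c in cells:
            w.writerow(list(c) + [int(s in presence[c]) for s in species])
    # 2. cell-center coordinates for each community
    with open(f"{out_dir}/{name}_community_coordinates.csv", "w", newline="") as f:
        w = csv.writer(f)
        w.writerow(["row", "col", "lon", "lat"])
        for r, c in cells:
            lon = h["xllcorner"] + (c + 0.5) * h["cellsize"]
            lat = h["yllcorner"] + (h["nrows"] - r - 0.5) * h["cellsize"]
            w.writerow([r, c, lon, lat])
    # 3. raster values for each community
    with open(f"{out_dir}/{name}_environment.csv", "w", newline="") as f:
        w = csv.writer(f)
        w.writerow(["row", "col"] + list(asc_files))
        for r, c in cells:
            w.writerow([r, c] + [g[r][c] for _, g in rasters])
```

Something like munch(["bears.csv"], ["bio1.asc"], "out", "my_stuff") would then mimic the output naming described below.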

Here’s the quick and dirty of it:

First, where it says “occurrence files”, hit the “add files” button and drop in all of the .csv files you want to use. Note that everything in that box is going to be thrown into ONE set of output files!

Second, add your environmental layers. ASCII raster (.asc) is the only format it understands.
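For reference, an ESRI ASCII raster starts with a short plain-text header, followed by the grid of values; the numbers below are made up:

```
ncols         100
nrows         80
xllcorner     -125.0
yllcorner     32.0
cellsize      0.1
NODATA_value  -9999
```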

Third, pick an output directory.

Finally, name your analysis. If your analysis is named “my_stuff”, the output files will be “my_stuff_communities.csv” (1 above), “my_stuff_community_coordinates.csv” (2), and “my_stuff_environment.csv” (3), all dropped into the output directory you chose.

In addition to its original intent, this tool may also be useful to those who want a quick and dirty way to extract their data for a MANOVA or other analysis – if you feed it occurrence files for one species at a time, the environment file will contain all of the conditions occupied by that species. Neat!
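As a rough illustration of that use, here is one way you might pull per-species environment files into a MANOVA with pandas and statsmodels. The file names, the layer names (bio1, bio2), and the assumption that the first three columns are identifiers are all hypothetical stand-ins:

```python
# A sketch, not a recipe: file names, column names, and the assumption
# that columns 1-3 are identifiers/coordinates are all hypothetical.
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

frames = []
for species in ["species_a", "species_b"]:   # one Datamuncher run per species
    df = pd.read_csv(f"{species}_environment.csv")
    df = df.iloc[:, 3:]                      # drop the three leading id columns
    df["species"] = species                  # add a grouping factor
    frames.append(df)
data = pd.concat(frames, ignore_index=True)

# Test whether the species occupy different environmental conditions.
# "bio1" and "bio2" stand in for whatever your raster layers are called.
fit = MANOVA.from_formula("bio1 + bio2 ~ species", data=data)
print(fit.mv_test())
```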

You can download it as a Windows executable file here, or as a Perl script here. Be aware that the Perl script requires Tk to be installed (NOT Tk+!), which you can do through your Perl distribution’s package manager. Also be aware that it probably won’t work on a Mac because of the way it parses directory paths. If anyone wants a Mac version, please feel free to email me.

Also let me know if you hit any snags. Testing has been rather limited so far, as I just finished it today. Use at your own risk!
