Rajan Khatiwoda (Śākta database)
Silje Lyngar Einarsen (Śākta database)
Siddharth Chhabra (Bengali database)
Michael Elison (Śākta database)
Prema Goet (Śākta and Geldblum Collection)
This project explores the potential for combining computational methods with traditional scholarly analysis in Hindu Studies. Compared to traditional workflows, in which scholars manually collate, compare and critically edit manuscripts into edited volumes, new computing tools hold substantial promise. For example, many time-consuming tasks may now be automated, and the analysis of large amounts of data can yield understandings and insights that would previously have been impossible to obtain.
With programming languages such as R or Python it is now possible to read entire corpora of texts into a computer and readily obtain answers to questions such as: How does the frequency of specific words differ between texts? Which words commonly co-occur? Which texts are unusual or interesting by some criterion, such as the occurrence of specific words or phrases, or the length of verses? Answering this sort of question through the analysis of large corpora of texts spanning a longue durée opens up entirely new possibilities and fields of investigation (e.g. in relation to discourse analysis, conceptual history and statistics). In this way, automated methods can give scholars new ways to efficiently understand and interact with large bodies of texts, which may then be combined with more traditional, in-depth manual analysis and interpretation.
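To make the kinds of questions listed above concrete, the following is a minimal sketch in Python of how word frequencies and simple co-occurrence can be computed across a small corpus. The two short texts here are placeholder strings invented for illustration, not actual Śākta material, and a real workflow would read texts from files and apply proper tokenisation.

```python
# Minimal sketch: word frequencies and same-text co-occurrence
# over a tiny in-memory corpus (placeholder texts, purely illustrative).
from collections import Counter
from itertools import combinations

corpus = {
    "text_a": "devi mantra devi puja yantra mantra devi",
    "text_b": "mantra puja homa puja mantra homa",
}

# 1. Per-text word frequencies via simple whitespace tokenisation.
freqs = {name: Counter(text.split()) for name, text in corpus.items()}

# How does the frequency of a specific word differ between texts?
word = "devi"
for name, counts in freqs.items():
    total = sum(counts.values())
    print(f"{word!r} in {name}: {counts[word]} of {total} tokens")

# 2. Simple co-occurrence: count how many texts each word pair shares.
cooc = Counter()
for counts in freqs.values():
    for a, b in combinations(sorted(counts), 2):
        cooc[(a, b)] += 1

# Word pairs appearing together in the most texts.
print(cooc.most_common(3))
```

Here co-occurrence is counted at the level of whole texts; for verse-level or windowed co-occurrence one would instead iterate over smaller units, but the counting logic stays the same.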
New digital tools also allow scholars to easily share data and analyses as scripts and open-access databases, or to build online visualisation tools that allow others to interact with digitised content in new ways.