Today, data has become the primary currency of organizations, powering analytics, AI/ML, and better data-driven decisions. But all of this data creates a great deal of complexity for data professionals, which in turn slows time to insight and leaves much of the data’s value unrealized.

“Today, the storage of objects in the cloud is handled in such a way that the data is passed to an engine such as Spark, where you can run either Java code or Python code. And different users will be running it in different clusters, different environments that are managed, secured, and so on. And so there’s all the infrastructure management that goes into that,” said Julian Forero, senior product marketing manager at Snowflake, during an SD Times Live! event.

According to Forero, this creates challenges around processing power, complexity, and management, and results in many silos as data is copied to different environments.

To overcome this, Snowflake built Snowpark, which allows data professionals to use the programming language of their choice, collaborate on the same platform, and work with the same data, while still getting the benefits of simplicity, access, performance, scalability, management, and security.

According to Snowflake’s documentation, Snowpark has a number of features that distinguish it from other client libraries, including constructs for building SQL statements, deferred execution of operations on the server side, and the ability to create user-defined functions.
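As an illustration of those features, here is a minimal sketch using Snowpark for Python (one of the supported languages, discussed below). The connection parameters, the ORDERS table, and its columns are placeholder assumptions for the example, not code from Snowflake’s documentation.

```python
# Minimal Snowpark for Python sketch; all names below are placeholders.
from snowflake.snowpark import Session
from snowflake.snowpark.functions import col, udf
from snowflake.snowpark.types import IntegerType

# Placeholder connection parameters for the session.
connection_parameters = {
    "account": "<account_identifier>",
    "user": "<user>",
    "password": "<password>",
    "warehouse": "<warehouse>",
    "database": "<database>",
    "schema": "<schema>",
}
session = Session.builder.configs(connection_parameters).create()

# DataFrame operations build SQL and are executed lazily on the server;
# nothing runs until an action such as collect() or show() is called.
orders = session.table("ORDERS")  # hypothetical table
large_orders = orders.filter(col("AMOUNT") > 1000).select("ORDER_ID", "AMOUNT")

# Register a user-defined function that executes inside Snowflake.
@udf(return_type=IntegerType(), input_types=[IntegerType()])
def double_amount(x: int) -> int:
    return x * 2

# The filter, projection, and UDF call are compiled into one query here.
result = large_orders.with_column("DOUBLED", double_amount(col("AMOUNT"))).collect()
```

Because execution is deferred, the filter, projection, and UDF call run as a single query inside Snowflake, next to the data, rather than in a separately managed cluster.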

Snowpark can be used with the Python, Java, and Scala languages. Example uses include processing semi-structured and unstructured data or providing business users with access to data science.
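For the semi-structured case, a similarly hedged sketch is shown below; the EVENTS table and its VARIANT column PAYLOAD are hypothetical, used only to illustrate how JSON fields can be addressed directly from the DataFrame API.

```python
# Sketch of querying semi-structured (JSON) data with Snowpark for Python;
# the EVENTS table and its VARIANT column PAYLOAD are placeholders.
from snowflake.snowpark import Session
from snowflake.snowpark.functions import col
from snowflake.snowpark.types import StringType

# Same placeholder connection parameters as in the previous sketch.
session = Session.builder.configs({
    "account": "<account_identifier>",
    "user": "<user>",
    "password": "<password>",
    "warehouse": "<warehouse>",
    "database": "<database>",
    "schema": "<schema>",
}).create()

# PAYLOAD is a VARIANT column holding JSON; fields are addressed with bracket
# notation, and the expression is translated to SQL that runs in Snowflake.
events = session.table("EVENTS")
signups = (
    events
    .filter(col("PAYLOAD")["event_type"].cast(StringType()) == "signup")
    .select(
        col("PAYLOAD")["user"]["id"].cast(StringType()).alias("USER_ID"),
        col("PAYLOAD")["user"]["country"].cast(StringType()).alias("COUNTRY"),
    )
)

signups.show()  # triggers execution; until this point only a query was built
```

The JSON traversal is pushed down to Snowflake as SQL, so semi-structured data can be queried in place rather than being copied into yet another environment.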

To demonstrate Snowpark, Caleb Bechtold, CTO and Data Science Architect at Snowflake, gave SD Times viewers a product demonstration during the free webinar. To learn more about how it works, watch the video.