Which modes are available in the PARSE_DOCUMENT function? (Choose two)
OCR
OMR
CONTENT
LAYOUT
PARSE_DOCUMENT supports two processing modes: OCR and LAYOUT. OCR mode performs Optical Character Recognition, extracting raw text from scanned documents, images within PDFs, or low-quality text-based documents. It is ideal for scenarios like contract ingestion, receipt processing, or older scanned documents. LAYOUT mode extracts structured layout elements (tables, paragraphs, lines, bounding boxes), preserving the original document's spatial organization. This enables downstream analytical tasks such as table reconstruction or semantic segmentation of document content. OMR (Optical Mark Recognition) is not a supported feature, and "CONTENT" is not a valid mode. By supporting OCR and LAYOUT modes, Snowflake Cortex provides robust document intelligence capabilities directly within the Snowflake environment.
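As a minimal sketch of calling the function in each mode, assuming a stage named @doc_stage holding a file report.pdf (both names are hypothetical):

-- OCR mode: raw text extraction from a scanned or image-based document
SELECT SNOWFLAKE.CORTEX.PARSE_DOCUMENT(@doc_stage, 'report.pdf', {'mode': 'OCR'});

-- LAYOUT mode: structure-preserving extraction of tables, paragraphs, and bounding boxes
SELECT SNOWFLAKE.CORTEX.PARSE_DOCUMENT(@doc_stage, 'report.pdf', {'mode': 'LAYOUT'});
====================================================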
What does "warehouse scaling in/out" refer to in Snowflake?
Changing the size of the warehouse (e.g., from Small to Medium or vice versa).
Moving data between different storage locations.
Changing the region of the warehouse.
Adjusting the number of clusters in a multi-cluster warehouse.
Scaling in/out in Snowflake refers to modifying the number of compute clusters associated with a multi-cluster virtual warehouse. Scaling out increases the cluster count to accommodate higher concurrency or workload spikes, allowing more queries to run simultaneously without queuing. Scaling in reduces the cluster count during periods of lower demand, optimizing compute usage and costs. This is distinct from scaling up/down, which refers to changing warehouse size (e.g., Small, Medium). Scaling does not involve data movement or region changes; warehouse compute is stateless and operates independently of storage. Multi-cluster warehouses allow Snowflake to automatically add or remove clusters based on demand when auto-scale policies are configured.
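A sketch of how this is typically configured (the warehouse name my_wh is hypothetical):

-- Scaling in/out: let Snowflake vary the cluster count between 1 and 4 based on demand
ALTER WAREHOUSE my_wh SET
  MIN_CLUSTER_COUNT = 1
  MAX_CLUSTER_COUNT = 4
  SCALING_POLICY = 'STANDARD';

-- Scaling up/down, by contrast, changes the size of each cluster
ALTER WAREHOUSE my_wh SET WAREHOUSE_SIZE = 'MEDIUM';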
=======================================
What is the purpose of Time Travel?
To automatically manage timestamp data types
To ensure that users' data can be recovered at any time
To facilitate the loading of historical data into Snowflake
To allow users to access historical data
Time Travel enables Snowflake users to query, clone, or restore historical versions of data. This includes retrieving previous states of tables, schemas, or databases—even after updates, deletes, or drops. Time Travel operates within a retention period (default: 1 day, up to 90 days on higher editions).
Users can query historical data using the AT or BEFORE clauses, restore dropped objects with UNDROP, and clone databases at specific points in time for backup or analysis.
Time Travel does not automatically manage timestamp data types. It does not guarantee indefinite recovery; after the retention window expires, data moves into Fail-safe. It also is not primarily designed for loading historical datasets; its purpose is to access past states of Snowflake-managed data.
Thus, the correct purpose is to enable access to historical data inside Snowflake.
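A quick sketch of typical Time Travel usage (the table name orders is hypothetical):

-- Query the table as it existed one hour ago
SELECT * FROM orders AT(OFFSET => -3600);

-- Query the state immediately before a specific statement ran
SELECT * FROM orders BEFORE(STATEMENT => '<query_id>');

-- Recover a dropped table while it is still within the retention period
UNDROP TABLE orders;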
==================
What is the purpose of the ACCOUNTADMIN role in Snowflake? (Choose two)
To grant and revoke privileges across the account
To monitor query performance
To create and manage databases
To manage all aspects of the Snowflake account
The ACCOUNTADMIN role is Snowflake's highest-privileged system-defined role. It provides complete administrative authority across the entire Snowflake account. Its functions include:
Managing global account parameters, replication settings, business continuity, failover groups, and region configurations
Administering billing, resource monitoring, and governance
Granting and revoking privileges across all objects and roles
Overseeing role hierarchy, including SECURITYADMIN and SYSADMIN
It is typically reserved for platform owners and security/governance teams, following least-privilege principles.
“Create and manage databases” is primarily a SYSADMIN responsibility.
“Monitor query performance” can be accomplished by roles with MONITOR privileges; it is not exclusive to ACCOUNTADMIN.
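As an illustration, a minimal sketch of typical ACCOUNTADMIN activity (the resource monitor name is hypothetical; SYSADMIN is the system-defined role):

USE ROLE ACCOUNTADMIN;

-- Grant an account-level privilege to another role
GRANT CREATE DATABASE ON ACCOUNT TO ROLE SYSADMIN;

-- Manage account-wide governance, e.g. a resource monitor for spend control
CREATE RESOURCE MONITOR account_monitor WITH CREDIT_QUOTA = 100;
ALTER ACCOUNT SET RESOURCE_MONITOR = account_monitor;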
====================================================
How can Python variable substitution be performed in Snowflake notebooks?
By manually editing data files
By configuring network settings
By using placeholders in SQL queries
By writing HTML code
In Snowflake Notebooks, Python and SQL operate in an integrated environment. Python variable substitution is performed using string placeholders or f-string interpolation within SQL statements. This allows dynamic query construction where Python values are embedded directly into SQL code executed through the Snowpark session.
Example:
from snowflake.snowpark.context import get_active_session
session = get_active_session()  # the notebook's active Snowpark session
table_name = "CUSTOMERS"
session.sql(f"SELECT * FROM {table_name}").show()
This method enables parameterized, context-aware SQL generation—ideal for iterative development, automation, and pipeline construction. It avoids manual text editing and provides consistency across Python and SQL execution layers.
Incorrect responses:
Network settings and HTML are unrelated to variable handling.
Editing data files does not apply to dynamic query parameterization.
Variable substitution is essential for combining logic, iteration, and conditional flows within notebooks.
====================================================
What is the primary benefit of the Snowflake data cloud?
It eliminates the need for data governance.
It provides direct access to underlying infrastructure.
It replaces traditional data warehouses with on-premises solutions.
It enables organizations to unite and share their data.
The Snowflake Data Cloud allows organizations to seamlessly share, access, and collaborate on data across departments and external partners, without copying or moving data. Through secure data sharing, listings, and data clean rooms, Snowflake eliminates data silos and dramatically improves data collaboration.
It does not eliminate the need for governance—Snowflake enhances governance via RBAC, masking policies, and centralized controls. It does not provide access to underlying cloud infrastructure; Snowflake abstracts that. It is not an on-premises solution; Snowflake is fully cloud-native.
Thus, the primary benefit is unifying and securely sharing data across the ecosystem.
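For example, a sketch of the secure data sharing flow (the share, database, and consumer account identifiers are hypothetical):

CREATE SHARE sales_share;
GRANT USAGE ON DATABASE sales_db TO SHARE sales_share;
GRANT USAGE ON SCHEMA sales_db.public TO SHARE sales_share;
GRANT SELECT ON TABLE sales_db.public.orders TO SHARE sales_share;

-- Make the share visible to a consumer account; no data is copied or moved
ALTER SHARE sales_share ADD ACCOUNTS = consumer_org.consumer_account;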
==================
Which SQL command is commonly used to load structured data from a stage into a Snowflake table?
INSERT INTO
COPY INTO
LOAD DATA
IMPORT DATA
The COPY INTO <table> command is the standard way to bulk-load structured data from an internal or external stage into a Snowflake table. INSERT INTO inserts individual rows rather than loading staged files in bulk, while LOAD DATA and IMPORT DATA are not valid Snowflake commands.
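A minimal sketch of a typical load (the table, stage path, and file format settings are hypothetical):

COPY INTO customers
FROM @my_stage/customer_data/
FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1)
ON_ERROR = 'ABORT_STATEMENT';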