
High-Performance CSV for SQL Server

The CSV library built for database workflows. Native IDataReader for SqlBulkCopy, built-in compression, progress reporting, and real-world data handling. From the trusted dbatools project.

dotnet add package Dataplat.Dbatools.Csv
  • 6x faster for SqlBulkCopy
  • Native IDataReader for SqlBulkCopy
  • 5 compression formats

Built for Database Workflows

Everything you need to import CSV data into SQL Server and other databases

Streaming IDataReader

Works seamlessly with SqlBulkCopy and other ADO.NET consumers. Stream millions of rows with minimal memory usage.

Parallel Processing

Optional multi-threaded parsing for large files. Process multiple batches concurrently for 2-4x speedup.

Compression Support

Automatic handling of GZip, Deflate, Brotli (.NET 8+), and ZLib (.NET 8+) with decompression bomb protection.

String Interning

Reduce memory pressure for files with repeated values. ArrayPool-based memory management for minimal allocations.

Culture-Aware Parsing

Configurable type converters for dates, numbers, booleans, and GUIDs across different cultures and formats.

Robust Error Handling

Collect errors, throw on first error, or skip bad rows. Handle duplicate headers and field count mismatches gracefully.

Progress & Cancellation

Monitor import progress with callbacks showing rows/second, percent complete, and elapsed time. Cancel long-running imports with CancellationToken.

Database-First Design

Built specifically for database import workflows. Culture-aware parsing, null vs empty handling, and direct integration with ADO.NET.

Simple, Powerful API

Get started in seconds with an intuitive API designed for real-world scenarios

CsvExample.cs
using System;
using System.Globalization;
using Dataplat.Dbatools.Csv.Reader;

// Simple usage - just pass the file path
using var reader = new CsvDataReader("data.csv");

while (reader.Read())
{
    var name = reader.GetString(0);
    var value = reader.GetInt32(1);
    Console.WriteLine($"{name}: {value}");
}

// With options for custom delimiters
var options = new CsvReaderOptions
{
    Delimiter = ";",
    HasHeaderRow = true,
    Culture = CultureInfo.GetCultureInfo("de-DE")
};
using var reader2 = new CsvDataReader("data.csv", options);
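For culture-sensitive data, the same options flow through to the typed IDataReader getters. The sketch below is illustrative rather than canonical: the file name, column layout, and sample formats are assumptions, and conversion behavior follows the type converters configured in CsvReaderOptions.

CultureParsing.cs
using System;
using System.Globalization;
using Dataplat.Dbatools.Csv.Reader;

// German-formatted file (assumed): dates such as 31.12.2024, decimals such as 1.234,56
var options = new CsvReaderOptions
{
    Delimiter = ";",
    HasHeaderRow = true,
    Culture = CultureInfo.GetCultureInfo("de-DE")
};

using var reader = new CsvDataReader("orders-de.csv", options);

while (reader.Read())
{
    // Typed getters come from IDataReader; values are converted using the configured culture
    var orderDate = reader.GetDateTime(0);
    var amount = reader.GetDecimal(1);
    Console.WriteLine($"{orderDate:yyyy-MM-dd}: {amount}");
}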
BulkImport.cs
using Dataplat.Dbatools.Csv.Reader;
using Microsoft.Data.SqlClient;

// Stream CSV directly to SQL Server - minimal memory usage
using var reader = new CsvDataReader("large-data.csv");
using var connection = new SqlConnection(connectionString);
connection.Open();

using var bulkCopy = new SqlBulkCopy(connection)
{
    DestinationTableName = "MyTable",
    BatchSize = 10000,
    BulkCopyTimeout = 0
};

// Streams directly from CSV to database
bulkCopy.WriteToServer(reader);

Console.WriteLine($"Imported {reader.CurrentRecordIndex} rows");
Compressed.cs
using Dataplat.Dbatools.Csv.Reader;

// Automatically detects compression from extension
// Supports: .gz, .gzip, .deflate, .br, .zlib
using var reader = new CsvDataReader("data.csv.gz");

while (reader.Read())
{
    // Process decompressed data normally
}

// Or specify explicitly with security limits
var options = new CsvReaderOptions
{
    CompressionType = CompressionType.GZip,
    MaxDecompressedSize = 100 * 1024 * 1024  // 100MB limit
};
using var stream = File.OpenRead("data.csv.gz");
using var reader2 = new CsvDataReader(stream, options);
Parallel.cs
using Dataplat.Dbatools.Csv.Reader;

// Enable parallel processing for large files
var options = new CsvReaderOptions
{
    EnableParallelProcessing = true,
    MaxDegreeOfParallelism = Environment.ProcessorCount
};

using var reader = new CsvDataReader("large-file.csv", options);

// Process as normal - parallel parsing happens automatically
while (reader.Read())
{
    // GetValue/GetValues are thread-safe in parallel mode
    var values = new object[reader.FieldCount];
    reader.GetValues(values);
    ProcessRecord(values);
}
ErrorHandling.cs
using Dataplat.Dbatools.Csv.Reader;

// Collect errors instead of throwing
var options = new CsvReaderOptions
{
    CollectParseErrors = true,
    MaxParseErrors = 100,
    ParseErrorAction = CsvParseErrorAction.AdvanceToNextLine,

    // Handle malformed data gracefully
    DuplicateHeaderBehavior = DuplicateHeaderBehavior.Rename,
    MismatchedFieldAction = MismatchedFieldAction.PadOrTruncate,
    QuoteMode = QuoteMode.Lenient
};

using var reader = new CsvDataReader("messy-data.csv", options);

while (reader.Read())
{
    // Process valid records
}

// Review collected errors
foreach (var error in reader.ParseErrors)
{
    Console.WriteLine($"Row {error.RowIndex}: {error.Message}");
}
PowerShell
# ─────────────────────────────────────────────────────────────
# Option 1: Use with dbatools (recommended for SQL Server work)
# ─────────────────────────────────────────────────────────────
Install-Module dbatools
Import-DbaCsv -Path "data.csv" -SqlInstance sql01 -Database tempdb -AutoCreateTable

# ─────────────────────────────────────────────────────────────
# Option 2: Use the library directly (standalone)
# ─────────────────────────────────────────────────────────────
Install-Module dbatools.library
Import-Module dbatools.library

# Create reader and read CSV
$reader = [Dataplat.Dbatools.Csv.Reader.CsvDataReader]::new("data.csv")

while ($reader.Read()) {
    $name  = $reader.GetString(0)
    $value = $reader.GetInt32(1)
    Write-Output "${name}: $value"
}
$reader.Dispose()

# With options for custom delimiters, compression, etc.
$options = [Dataplat.Dbatools.Csv.Reader.CsvReaderOptions]::new()
$options.Delimiter = "::"
$options.HasHeaderRow = $true

$reader = [Dataplat.Dbatools.Csv.Reader.CsvDataReader]::new("data.csv", $options)

Benchmark Results

100,000 rows × 10 columns (.NET 8, AVX-512)

Single Column Read (typical SqlBulkCopy/IDataReader pattern)

Library       Time (ms)   vs Dataplat
Sep                  18   3.7x faster
Sylvan               27   2.5x faster
Dataplat             67   baseline
CsvHelper            76   1.1x slower
LumenWorks          395   5.9x slower

All Columns Read (full row processing)

Library       Time (ms)   vs Dataplat
Sep                  30   1.8x faster
Sylvan               35   1.6x faster
Dataplat             55   baseline
CsvHelper            97   1.8x slower
LumenWorks          102   1.9x slower

Sep and Sylvan are faster for raw parsing. Dataplat wins for complete database workflows: IDataReader + compression + progress + messy data handling.

Why the gap? Sep/Sylvan use Span<T> and defer string allocation. The IDataReader interface requires returning actual objects—a fundamental tradeoff for SqlBulkCopy compatibility.
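As a rough illustration of that tradeoff (not code from any of the libraries benchmarked here): an IDataReader must return materialized CLR objects, while a span-based parser can hand back a view over its existing buffer and allocate only on demand.

using System;
using System.Data;

static class AllocationTradeoff
{
    // IDataReader contract: each field access returns a real object, so string
    // fields are allocated and value types are boxed before SqlBulkCopy sees them.
    public static object ReadField(IDataReader reader, int ordinal)
        => reader.GetValue(ordinal);

    // A Span-based reader can instead expose a window over its parse buffer
    // and defer string allocation until the caller actually asks for one.
    public static ReadOnlySpan<char> PeekField(ReadOnlySpan<char> row, int start, int length)
        => row.Slice(start, length);
}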

Why Choose Dbatools.Csv?

The right tool for database import workflows

Raw Speed Libraries

  • Sep: Fastest raw parsing (21 GB/s)
  • Sylvan: Very fast, IDataReader
  • No built-in compression
  • Minimal malformed data handling
  • No progress reporting
  • Requires more configuration

Dataplat.Dbatools.Csv

  • Native IDataReader + SqlBulkCopy
  • Built-in GZip, Brotli, ZLib
  • Lenient parsing for messy data
  • Progress reporting & cancellation
  • Culture-aware type conversion
  • dbatools integration
  • 6x faster than LumenWorks (SqlBulkCopy)

Ready to get started?

Install Dataplat.Dbatools.Csv and start processing CSV files faster today.