The CSV library built for database workflows. Native IDataReader for SqlBulkCopy, built-in compression, progress reporting, and real-world data handling. From the trusted dbatools project.
dotnet add package Dataplat.Dbatools.Csv
Everything you need to import CSV data into SQL Server and other databases
Works seamlessly with SqlBulkCopy and other ADO.NET consumers. Stream millions of rows with minimal memory usage.
Optional multi-threaded parsing for large files. Process multiple batches concurrently for a 2-4x speedup.
Automatic handling of GZip, Deflate, Brotli (.NET 8+), and ZLib (.NET 8+) with decompression bomb protection.
Reduce memory pressure for files with repeated values. ArrayPool-based memory management for minimal allocations.
Configurable type converters for dates, numbers, booleans, and GUIDs across different cultures and formats.
Collect errors, throw on first error, or skip bad rows. Handle duplicate headers and field count mismatches gracefully.
Monitor import progress with callbacks showing rows/second, percent complete, and elapsed time. Cancel long-running imports with a CancellationToken (a sketch of the pattern is shown below).
Built specifically for database import workflows. Culture-aware parsing, null vs empty handling, and direct integration with ADO.NET.
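The library's progress callbacks report these numbers for you; as a minimal hand-rolled sketch of the same pattern, using only the CurrentRecordIndex property shown further down plus standard .NET types, progress and cancellation can also be wired up like this:

using System;
using System.Diagnostics;
using System.Threading;
using Dataplat.Dbatools.Csv.Reader;

// Cancel automatically after 30 minutes (or call cts.Cancel() from another thread)
using var cts = new CancellationTokenSource(TimeSpan.FromMinutes(30));
var stopwatch = Stopwatch.StartNew();

using var reader = new CsvDataReader("large-data.csv");
while (reader.Read())
{
    // Stop cleanly when cancellation has been requested
    cts.Token.ThrowIfCancellationRequested();

    // Report throughput every 100,000 rows
    if (reader.CurrentRecordIndex > 0 && reader.CurrentRecordIndex % 100_000 == 0)
    {
        var rowsPerSecond = reader.CurrentRecordIndex / stopwatch.Elapsed.TotalSeconds;
        Console.WriteLine($"{reader.CurrentRecordIndex:N0} rows in {stopwatch.Elapsed} ({rowsPerSecond:N0} rows/sec)");
    }
}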
Get started in seconds with an intuitive API designed for real-world scenarios
using System.Globalization;
using Dataplat.Dbatools.Csv.Reader;
// Simple usage - just pass the file path
using var reader = new CsvDataReader("data.csv");
while (reader.Read())
{
    var name = reader.GetString(0);
    var value = reader.GetInt32(1);
    Console.WriteLine($"{name}: {value}");
}
// With options for custom delimiters
var options = new CsvReaderOptions
{
    Delimiter = ";",
    HasHeaderRow = true,
    Culture = CultureInfo.GetCultureInfo("de-DE")
};
using var reader2 = new CsvDataReader("data.csv", options);
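As a short illustration of how those de-DE options play out, the sketch below assumes the standard IDataReader typed getters (GetDecimal, GetDateTime) honor the configured Culture; de-data.csv is just an example file name:

// German-formatted input (semicolon delimiter, comma decimal separator, dd.MM.yyyy dates):
//   Betrag;Datum
//   1.234,56;31.12.2024
using var reader3 = new CsvDataReader("de-data.csv", options);
while (reader3.Read())
{
    var amount = reader3.GetDecimal(0);   // 1234.56 under de-DE
    var date = reader3.GetDateTime(1);    // 31 December 2024 under de-DE
    Console.WriteLine($"{amount} booked on {date:d}");
}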
using Dataplat.Dbatools.Csv.Reader;
using Microsoft.Data.SqlClient;
// Stream CSV directly to SQL Server - minimal memory usage
using var reader = new CsvDataReader("large-data.csv");
using var connection = new SqlConnection(connectionString);
connection.Open();
using var bulkCopy = new SqlBulkCopy(connection)
{
    DestinationTableName = "MyTable",
    BatchSize = 10000,
    BulkCopyTimeout = 0
};
// Streams directly from CSV to database
bulkCopy.WriteToServer(reader);
Console.WriteLine($"Imported {reader.CurrentRecordIndex} rows");
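If the CSV column order does not match the destination table, the usual SqlBulkCopy column mappings apply. This sketch maps columns by header name and assumes the reader surfaces CSV headers through the standard IDataReader GetName method; add the mappings before calling WriteToServer:

// Map each CSV header to the destination column with the same name
for (var i = 0; i < reader.FieldCount; i++)
{
    var columnName = reader.GetName(i);
    bulkCopy.ColumnMappings.Add(columnName, columnName);
}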
using Dataplat.Dbatools.Csv.Reader;
// Automatically detects compression from extension
// Supports: .gz, .gzip, .deflate, .br, .zlib
using var reader = new CsvDataReader("data.csv.gz");
while (reader.Read())
{
    // Process decompressed data normally
}
// Or specify explicitly with security limits
var options = new CsvReaderOptions
{
    CompressionType = CompressionType.GZip,
    MaxDecompressedSize = 100 * 1024 * 1024 // 100MB limit
};
using var reader2 = new CsvDataReader(stream, options);
using Dataplat.Dbatools.Csv.Reader;
// Enable parallel processing for large files
var options = new CsvReaderOptions
{
    EnableParallelProcessing = true,
    MaxDegreeOfParallelism = Environment.ProcessorCount
};
using var reader = new CsvDataReader("large-file.csv", options);
// Process as normal - parallel parsing happens automatically
while (reader.Read())
{
    // GetValue/GetValues are thread-safe in parallel mode
    var values = new object[reader.FieldCount];
    reader.GetValues(values);
    ProcessRecord(values);
}
using Dataplat.Dbatools.Csv.Reader;
// Collect errors instead of throwing
var options = new CsvReaderOptions
{
    CollectParseErrors = true,
    MaxParseErrors = 100,
    ParseErrorAction = CsvParseErrorAction.AdvanceToNextLine,
    // Handle malformed data gracefully
    DuplicateHeaderBehavior = DuplicateHeaderBehavior.Rename,
    MismatchedFieldAction = MismatchedFieldAction.PadOrTruncate,
    QuoteMode = QuoteMode.Lenient
};
using var reader = new CsvDataReader("messy-data.csv", options);
while (reader.Read())
{
    // Process valid records
}
// Review collected errors
foreach (var error in reader.ParseErrors)
{
    Console.WriteLine($"Row {error.RowIndex}: {error.Message}");
}
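When imports run unattended, it helps to persist what was skipped. A small follow-on sketch using only the ParseErrors members shown above; the report file name is just an example:

// Persist skipped rows so they can be reviewed or re-imported later
var report = reader.ParseErrors
    .Select(e => $"Row {e.RowIndex}: {e.Message}")
    .ToList();
File.WriteAllLines("messy-data.errors.txt", report);
Console.WriteLine($"{report.Count} rows could not be parsed; see messy-data.errors.txt");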
# ─────────────────────────────────────────────────────────────
# Option 1: Use with dbatools (recommended for SQL Server work)
# ─────────────────────────────────────────────────────────────
Install-Module dbatools
Import-DbaCsv -Path "data.csv" -SqlInstance sql01 -Database tempdb -AutoCreateTable
# ─────────────────────────────────────────────────────────────
# Option 2: Use the library directly (standalone)
# ─────────────────────────────────────────────────────────────
Install-Module dbatools.library
Import-Module dbatools.library
# Create reader and read CSV
$reader = [Dataplat.Dbatools.Csv.Reader.CsvDataReader]::new("data.csv")
while ($reader.Read()) {
    $name = $reader.GetString(0)
    $value = $reader.GetInt32(1)
    Write-Output "${name}: $value"
}
$reader.Dispose()
# With options for custom delimiters, compression, etc.
$options = [Dataplat.Dbatools.Csv.Reader.CsvReaderOptions]::new()
$options.Delimiter = "::"
$options.HasHeaderRow = $true
$reader = [Dataplat.Dbatools.Csv.Reader.CsvDataReader]::new("data.csv", $options)
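# ─────────────────────────────────────────────────────────────
# Sketch: stream a CSV straight into SQL Server from PowerShell
# Assumes Microsoft.Data.SqlClient is available in the session and
# $connectionString points at your instance
# ─────────────────────────────────────────────────────────────
$reader = [Dataplat.Dbatools.Csv.Reader.CsvDataReader]::new("large-data.csv")
$connection = [Microsoft.Data.SqlClient.SqlConnection]::new($connectionString)
$connection.Open()
$bulkCopy = [Microsoft.Data.SqlClient.SqlBulkCopy]::new($connection)
$bulkCopy.DestinationTableName = "MyTable"
$bulkCopy.BatchSize = 10000
$bulkCopy.WriteToServer($reader)

# Clean up
$bulkCopy.Close()
$connection.Dispose()
$reader.Dispose()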
100,000 rows × 10 columns (.NET 8, AVX-512)
| Library | Time (ms) | vs Dataplat |
|---|---|---|
| Sep | 18 ms | 3.7x faster |
| Sylvan | 27 ms | 2.5x faster |
| Dataplat | 67 ms | baseline |
| CsvHelper | 76 ms | 1.1x slower |
| LumenWorks | 395 ms | 5.9x slower |
| Library | Time (ms) | vs Dataplat |
|---|---|---|
| Sep | 30 ms | 1.8x faster |
| Sylvan | 35 ms | 1.6x faster |
| Dataplat | 55 ms | baseline |
| CsvHelper | 97 ms | 1.8x slower |
| LumenWorks | 102 ms | 1.9x slower |
Sep and Sylvan are faster for raw parsing. Dataplat wins for complete database workflows: IDataReader + compression + progress + messy data handling.
Why the gap? Sep/Sylvan use Span<T> and defer string allocation. The IDataReader interface requires returning actual objects—a fundamental tradeoff for SqlBulkCopy compatibility.
The right tool for database import workflows
Install Dataplat.Dbatools.Csv and start processing CSV files faster today.