node-mssql
Microsoft SQL Server client for Node.js
Supported TDS drivers:
- Tedious (pure JavaScript - Windows/macOS/Linux, default)
- Microsoft / Contributors Node V8 Driver for Node.js for SQL Server (v2 native - Windows or Linux/macOS 64 bits only)
Installation
Short Example: Use Connect String

const sql = require('mssql')

async () => {
    try {
        // make sure that any items are correctly URL encoded in the connection string
        await sql.connect('Server=localhost,1433;Database=database;User Id=username;Password=password;Encrypt=true')
        const result = await sql.query`select * from mytable where id = ${value}`
        console.dir(result)
    } catch (err) {
        // ... error checks
    }
}
If you're on Windows Azure, add ?encrypt=true
to your connection string. See docs to learn more.
Parts of the connection URI should be correctly URL encoded so that the URI can be parsed correctly.
Longer Example: Connect via Config Object

Assuming you have set the appropriate environment variables, you can construct a config object as follows:

const sql = require('mssql')

const sqlConfig = {
    user: process.env.DB_USER,
    password: process.env.DB_PWD,
    database: process.env.DB_NAME,
    server: 'localhost',
    pool: {
        max: 10,
        min: 0,
        idleTimeoutMillis: 30000
    },
    options: {
        encrypt: true, // for azure
        trustServerCertificate: false // change to true for local dev / self-signed certs
    }
}

async () => {
    try {
        // make sure that any items are correctly URL encoded in the connection string
        await sql.connect(sqlConfig)
        const result = await sql.query`select * from mytable where id = ${value}`
        console.dir(result)
    } catch (err) {
        // ... error checks
    }
}
Documentation
Examples
- Async/Await
- Promises
- ES6 Tagged template literals
- Callbacks
- Streaming
- Connection Pools
Configuration
- General
- Formats
Drivers
- Tedious
- Microsoft / Contributors Node V8 Driver for Node.js for SQL Server
Connections
- Pool Management
- ConnectionPool
- connect
- close
Requests
- Request
- execute
- input
- output
- toReadableStream
- pipe
- query
- batch
- bulk
- cancel
Transactions
- Transaction
- begin
- commit
- rollback
Prepared Statements
- PreparedStatement
- input
- output
- prepare
- execute
- unprepare
Other
- CLI
- Geography and Geometry
- Table-Valued Parameter
- Response Schema
- Affected Rows
- JSON support
- Handling Duplicate Column Names
- Errors
- Informational messages
- Metadata
- Data Types
- SQL injection
- Known Issues
- Contributing
- 6.x to 7.x changes
- 5.x to 6.x changes
- 4.x to 5.x changes
- 3.x to 4.x changes
- 3.x Documentation
Examples
Config
const config = {
    user: '...',
    password: '...',
    server: 'localhost', // You can use 'localhost\\instance' to connect to named instance
    database: '...',
}
Async/Await
const sql = require('mssql')

(async function () {
    try {
        let pool = await sql.connect(config)
        let result1 = await pool.request()
            .input('input_parameter', sql.Int, value)
            .query('select * from mytable where id = @input_parameter')

        console.dir(result1)

        // Stored procedure
        let result2 = await pool.request()
            .input('input_parameter', sql.Int, value)
            .output('output_parameter', sql.VarChar(50))
            .execute('procedure_name')

        console.dir(result2)
    } catch (err) {
        // ... error checks
    }
})()

sql.on('error', err => {
    // ... error handler
})
Promises
Queries
const sql = require('mssql')

sql.on('error', err => {
    // ... error handler
})

sql.connect(config).then(pool => {
    // Query
    return pool.request()
        .input('input_parameter', sql.Int, value)
        .query('select * from mytable where id = @input_parameter')
}).then(result => {
    console.dir(result)
}).catch(err => {
    // ... error checks
});
Stored procedures
const sql = require('mssql')

sql.on('error', err => {
    // ... error handler
})

sql.connect(config).then(pool => {
    // Stored procedure
    return pool.request()
        .input('input_parameter', sql.Int, value)
        .output('output_parameter', sql.VarChar(50))
        .execute('procedure_name')
}).then(result => {
    console.dir(result)
}).catch(err => {
    // ... error checks
})
Native Promise is used by default. You can easily change this with sql.Promise = require('myownpromisepackage').
ES6 Tagged template literals
const sql = require('mssql')

sql.connect(config).then(() => {
    return sql.query`select * from mytable where id = ${value}`
}).then(result => {
    console.dir(result)
}).catch(err => {
    // ... error checks
})

sql.on('error', err => {
    // ... error handler
})

All values are automatically sanitized against SQL injection. This is because the query is rendered as a prepared statement, and thus all limitations imposed by MS SQL on parameters apply, e.g. column names cannot be passed/set in statements using variables.
Callbacks
const sql = require('mssql')

sql.connect(config, err => {
    // ... error checks

    // Query
    new sql.Request().query('select 1 as number', (err, result) => {
        // ... error checks
        console.dir(result)
    })

    // Stored Procedure
    new sql.Request()
        .input('input_parameter', sql.Int, value)
        .output('output_parameter', sql.VarChar(50))
        .execute('procedure_name', (err, result) => {
            // ... error checks
            console.dir(result)
        })

    // Using template literal
    const request = new sql.Request()
    request.query(request.template`select * from mytable where id = ${value}`, (err, result) => {
        // ... error checks
        console.dir(result)
    })
})

sql.on('error', err => {
    // ... error handler
})
Streaming
If you plan to work with a large number of rows, you should always use streaming. Once you enable this, you must listen for events to receive data.

const sql = require('mssql')

sql.connect(config, err => {
    // ... error checks

    const request = new sql.Request()
    request.stream = true // You can set streaming differently for each request
    request.query('select * from verylargetable') // or request.execute(procedure)

    request.on('recordset', columns => {
        // Emitted once for each recordset in a query
    })

    request.on('row', row => {
        // Emitted for each row in a recordset
    })

    request.on('rowsaffected', rowCount => {
        // Emitted for each `INSERT`, `UPDATE` or `DELETE` statement
        // Requires NOCOUNT to be OFF (default)
    })

    request.on('error', err => {
        // May be emitted multiple times
    })

    request.on('done', result => {
        // Always emitted as the last one
    })
})

sql.on('error', err => {
    // ... error handler
})
When streaming large sets of data you want to back-off or chunk the amount of data you're processing to prevent memory exhaustion issues; you can use the Request.pause() function to do this. Here is an example of managing rows in batches of 15:

let rowsToProcess = [];

request.on('row', row => {
    rowsToProcess.push(row);
    if (rowsToProcess.length >= 15) {
        request.pause();
        processRows();
    }
});

request.on('done', () => {
    processRows();
});

function processRows() {
    // process rows
    rowsToProcess = [];
    request.resume();
}
Connection Pools

An important concept to understand when using this library is Connection Pooling, as this library uses connection pooling extensively. As one Node.js process is able to handle multiple requests at once, we can take advantage of this long-running process to create a pool of database connections for reuse; this saves the overhead of connecting to the database for each request (as would be the case in something like PHP, where one process handles one request).

With the advantages of pooling come some added complexities, but these are generally just conceptual; once you understand how the pooling works, it is simple to make use of it efficiently and effectively.
The Global Connection Pool
To aid with pool management in your application there is the sql.connect() function that is used to connect to the global connection pool. You can make repeated calls to this function, and if the global pool is already connected, it will resolve to the connected pool. The following example obtains the global connection pool by running sql.connect(), and then runs the query against the pool.

NB: It's important to note that there can only be one global connection pool connected at a time. Providing a different connection config to the connect() function will not create a new connection if it is already connected.

const sql = require('mssql')
const config = { ... }

// run a query against the global connection pool
function runQuery(query) {
    // sql.connect() will return the existing global pool if it exists or create a new one if it doesn't
    return sql.connect(config).then((pool) => {
        return pool.query(query)
    })
}
Awaiting or .then-ing the pool creation is a safe way to ensure that the pool is always ready, without knowing where it is needed first. In practice, once the pool is created there will be no delay for the next connect() call.

Also notice that we do not close the global pool by calling sql.close() after the query is executed, because other queries may need to be run against this pool and closing it will add additional overhead to running subsequent queries. You should only ever close the global pool if you're sure the application is finished. For instance, if you are running some kind of CLI tool or a CRON task you can close the pool at the end of the script.
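For example, a one-off script (a CLI tool or CRON task) might close the global pool once its work is done; this is a minimal sketch with a placeholder query and table name:

const sql = require('mssql')
const config = { /* ... */ }

async function main () {
    try {
        const pool = await sql.connect(config)
        const result = await pool.request().query('select count(*) as total from mytable')
        console.log(result.recordset[0].total)
    } finally {
        // safe to close here because this script runs once and then exits
        await sql.close()
    }
}

main().catch(err => {
    console.error(err)
    process.exit(1)
})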
Global Pool Single Instance
The ability to call connect() and close() repeatedly on the global pool is intended to make pool management easier; however, it is better to maintain your own reference to the pool, where connect() is called once, and the resulting global pool's connection promise is re-used throughout the entire application.

For example, in Express applications, the following approach uses a single global pool instance added to app.locals so the application has access to it when needed. The server start is then chained inside the connect() promise.

const express = require('express')
const sql = require('mssql')
const config = { /* ... */ }

// instantiate a connection pool
const appPool = new sql.ConnectionPool(config)

// require route handlers and use the same connection pool everywhere
const route1 = require('./routes/route1')

const app = express()
app.get('/path', route1)

// connect the pool and start the web server when done
appPool.connect().then(function (pool) {
    app.locals.db = pool
    const server = app.listen(3000, function () {
        const host = server.address().address
        const port = server.address().port
        console.log('Example app listening at http://%s:%s', host, port)
    })
}).catch(function (err) {
    console.error('Error creating connection pool', err)
})
Then the route uses the connection pool in the app.locals object:

// ./routes/route1.js
const sql = require('mssql');

module.exports = function (req, res) {
    req.app.locals.db.query('SELECT TOP 10 * FROM table_name', function (err, recordset) {
        if (err) {
            console.error(err)
            res.status(500).send('SERVER ERROR')
            return
        }
        res.status(200).json({ message: 'success' })
    })
}
Advanced Pool Management
For some use-cases you may want to implement your own connection pool management, rather than using the global connection pool. Reasons for doing this include:

- Supporting connections to multiple databases
- Creation of separate pools for read vs read/write operations

The following code is an example of a custom connection pool implementation.

// pool-manager.js
const mssql = require('mssql')
const pools = new Map();

module.exports = {
    /**
     * Get or create a pool. If a pool doesn't exist the config must be provided.
     * If the pool does exist the config is ignored (even if it was different to the one provided
     * when creating the pool)
     *
     * @param {string} name
     * @param [config]
     * @return {Promise.<mssql.ConnectionPool>}
     */
    get: (name, config) => {
        if (!pools.has(name)) {
            if (!config) {
                throw new Error('Pool does not exist');
            }
            const pool = new mssql.ConnectionPool(config);
            // automatically remove the pool from the cache if `pool.close()` is called
            const close = pool.close.bind(pool);
            pool.close = (...args) => {
                pools.delete(name);
                return close(...args);
            }
            pools.set(name, pool.connect());
        }
        return pools.get(name);
    },
    /**
     * Closes all the pools and removes them from the store
     *
     * @return {Promise<mssql.ConnectionPool[]>}
     */
    closeAll: () => Promise.all(Array.from(pools.values()).map((connect) => {
        return connect.then((pool) => pool.close());
    })),
};
This file can then be used in your application to create, fetch, and close pools.

const { get } = require('./pool-manager')

async function example() {
    const pool = await get('default')
    return pool.request().query('SELECT 1')
}

Similar to the global connection pool, you should aim to only close a pool when you know it will never be needed by the application again. Typically this will only be when your application is shutting down (see the shutdown sketch below).
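One way to do that, assuming the pool-manager module above, is to hook closeAll() into a process shutdown signal; the signal choice and exit handling here are illustrative:

const { closeAll } = require('./pool-manager')

// close every cached pool when the process is asked to shut down
process.on('SIGTERM', () => {
    closeAll()
        .then(() => process.exit(0))
        .catch(err => {
            console.error('Error closing pools', err)
            process.exit(1)
        })
})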
Result value manipulation
In some instances it is desirable to manipulate the record data as it is returned from the database; this may be to cast it as a particular object (e.g. a moment object instead of a Date) or similar.

In v8.0.0+ it is possible to register per-datatype handlers:

const sql = require('mssql')

// in this example all integer values will return 1 more than their actual value in the database
sql.valueHandler.set(sql.TYPES.Int, (value) => value + 1)

sql.query('SELECT * FROM [example]').then((result) => {
    // all `int` columns will return a manipulated value as per the callback above
})
Configuration
The following is an example configuration object:
const config = {
    user: '...',
    password: '...',
    server: 'localhost',
    database: '...',
    pool: {
        max: 10,
        min: 0,
        idleTimeoutMillis: 30000
    }
}
General (same for all drivers)
- user - User name to use for authentication.
- password - Password to use for authentication.
- server - Server to connect to. You can use 'localhost\instance' to connect to a named instance.
- port - Port to connect to (default: 1433). Don't set when connecting to a named instance.
- domain - Once you set domain, the driver will connect to SQL Server using domain login.
- database - Database to connect to (default: dependent on server configuration).
- connectionTimeout - Connection timeout in ms (default: 15000).
- requestTimeout - Request timeout in ms (default: 15000). NOTE: the msnodesqlv8 driver doesn't support timeouts < 1 second. When passed via connection string, the key must be request timeout.
- stream - Stream recordsets/rows instead of returning them all at once as an argument of the callback (default: false). You can also enable streaming for each request independently (request.stream = true). Always set to true if you plan to work with a large number of rows.
- parseJSON - Parse JSON recordsets to JS objects (default: false). For more information please see section JSON support.
- pool.max - The maximum number of connections there can be in the pool (default: 10).
- pool.min - The minimum number of connections there can be in the pool (default: 0).
- pool.idleTimeoutMillis - The number of milliseconds before closing an unused connection (default: 30000).
- arrayRowMode - Return row results as an array instead of a keyed object. Also adds a columns array. See Handling Duplicate Column Names.

A complete list of pool options can be found here.
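For illustration, a config object combining several of the options above might look like the following sketch; the server, credentials, and specific values are placeholders rather than recommendations:

const config = {
    user: 'username',
    password: 'password',
    server: 'localhost',
    port: 1433,
    database: 'database',
    connectionTimeout: 15000,
    requestTimeout: 15000,
    stream: false,        // set to true when working with large result sets
    parseJSON: true,      // parse JSON recordsets to JS objects
    arrayRowMode: false,  // keyed objects per row (default)
    pool: {
        max: 10,
        min: 0,
        idleTimeoutMillis: 30000
    }
}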
Formats
In addition to the configuration object there is an option to pass config as a connection string. Connection strings are supported.

Classic Connection String

Server=localhost,1433;Database=database;User Id=username;Password=password;Encrypt=true
Driver=msnodesqlv8;Server=(local)\INSTANCE;Database=database;UID=DOMAIN\username;PWD=password;Encrypt=true
Drivers
Tedious
Default driver, actively maintained and production ready. Platform independent, runs everywhere Node.js runs. Officially supported by Microsoft.
Extra options:
- beforeConnect(conn) - Function, which is invoked before opening the connection. The parameter conn is the configured tedious Connection. It can be used for attaching event handlers like in this example:

require('mssql').connect({
    ...config,
    beforeConnect: conn => {
        conn.once('connect', err => { err ? console.error(err) : console.log('mssql connected') })
        conn.once('end', err => { err ? console.error(err) : console.log('mssql disconnected') })
    }
})

- options.instanceName - The instance name to connect to. The SQL Server Browser service must be running on the database server, and UDP port 1434 on the database server must be reachable.
- options.useUTC - A boolean determining whether or not to use UTC time for values without a time zone offset (default: true).
- options.encrypt - A boolean determining whether or not the connection will be encrypted (default: true).
- options.tdsVersion - The version of TDS to use (default: 7_4, available: 7_1, 7_2, 7_3_A, 7_3_B, 7_4).
- options.appName - Application name used for SQL server logging.
- options.abortTransactionOnError - A boolean determining whether to rollback a transaction automatically if any error is encountered during the given transaction's execution. This sets the value for XACT_ABORT during the initial SQL phase of a connection.
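As an illustration, these driver-specific settings sit under the options key of the config object; the values shown here are placeholders, not recommendations:

const config = {
    user: 'username',
    password: 'password',
    server: 'localhost',
    database: 'database',
    options: {
        encrypt: true,                 // encrypt the connection
        useUTC: true,                  // treat values without a time zone offset as UTC
        tdsVersion: '7_4',             // TDS protocol version
        appName: 'my-app',             // shows up in SQL Server logs
        abortTransactionOnError: true  // sets XACT_ABORT on connect
    }
}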
Authentication:
On top of the extra options, an authentication property can be added to the pool config option (an example follows the list below):

- authentication - An object with authentication settings, according to the Tedious Documentation. Passing this object will override the user, password, and domain settings.
- authentication.type - Type of the authentication method, valid types are default, ntlm, azure-active-directory-password, azure-active-directory-access-token, azure-active-directory-msi-vm, or azure-active-directory-msi-app-service
- authentication.options - Options of the authentication required by the tedious driver, depends on authentication.type. For more details, check Tedious Authentication Interfaces
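For illustration only, an NTLM-style authentication block might look roughly like this; the exact option names for each type come from the Tedious authentication interfaces linked above, and all values here are placeholders:

const config = {
    server: 'localhost',
    database: 'database',
    // overrides the top-level user/password/domain settings
    authentication: {
        type: 'ntlm',
        options: {
            domain: 'MYDOMAIN',
            userName: 'username',
            password: 'password'
        }
    },
    options: {
        encrypt: true
    }
}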
More information about Tedious-specific options: http://tediousjs.github.io/tedious/api-connection.html
Microsoft / Contributors Node V8 Driver for Node.js for SQL Server
Requires Node.js v10 or newer. Windows 32-64 bits or Linux/macOS 64 bits only. This driver is not part of the default package and must be installed separately by npm install msnodesqlv8@^2. To use this driver, use this require syntax: const sql = require('mssql/msnodesqlv8').

Note: If you use imports in your lib to set up your request (const { VarChar } = require('mssql')) you also need to update all your type imports in your code (const { VarChar } = require('mssql/msnodesqlv8')) or a connection.on is not a function error will be thrown.
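Putting the two notes above together, a minimal sketch of using this driver could look like this; the connection config and query are placeholders:

// both the client and the type helpers come from the msnodesqlv8 entry point
const sql = require('mssql/msnodesqlv8')
const { VarChar } = sql

const config = {
    server: 'localhost',
    database: 'database',
    options: {
        trustedConnection: true // Windows Authentication
    }
}

sql.connect(config).then(pool => {
    return pool.request()
        .input('name', VarChar(50), 'example')
        .query('select * from mytable where name = @name')
}).then(result => {
    console.dir(result)
}).catch(err => {
    console.error(err)
})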
Extra options:
- beforeConnect(conn) - Function, which is invoked before opening the connection. The parameter conn is the connection configuration, which can be modified to pass extra parameters to the driver's open() method.
- connectionString - Connection string (default: see below).
- options.instanceName - The instance name to connect to. The SQL Server Browser service must be running on the database server, and UDP port 1434 on the database server must be reachable.
- options.trustedConnection - Use Windows Authentication (default: false).
- options.useUTC - A boolean determining whether or not to use UTC time for values without a time zone offset (default: true).
Default connection string when connecting to port:
Driver={SQL Server Native Client 11.0};Server={#{server},#{port}};Database={#{database}};Uid={#{user}};Pwd={#{password}};Trusted_Connection={#{trusted}};
Default connection string when connecting to a named instance:

Driver={SQL Server Native Client 11.0};Server={#{server}\\#{instance}};Database={#{database}};Uid={#{user}};Pwd={#{password}};Trusted_Connection={#{trusted}};

Please note that the connection string with this driver is not the same as tedious and uses yes/no instead of true/false. You can see more on the ODBC documentation.
Connections
Internally, each ConnectionPool instance is a separate pool of TDS connections. Once you create a new Request/Transaction/Prepared Statement, a new TDS connection is acquired from the pool and reserved for the desired action. Once the action is complete, the connection is released back to the pool. Connection health check is built-in, so once a dead connection is discovered, it is immediately replaced with a new one.

IMPORTANT: Always attach an error listener to the created connection. Whenever something goes wrong with the connection it will emit an error, and if there is no listener it will crash your application with an uncaught error.

const pool = new sql.ConnectionPool({ /* config */ })
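Following that advice, a minimal sketch of attaching the listener:

const pool = new sql.ConnectionPool({ /* config */ })

// without this listener an emitted pool error would crash the process
pool.on('error', err => {
    console.error('Connection pool error', err)
})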
Events
- error(err) - Dispatched on connection error.
connect ([callback])
Create a new connection pool. The initial probe connection is created to find out whether the configuration is valid.
Arguments
- callback(err) - A callback which is called after the initial probe connection has been established, or an error has occurred. Optional. If omitted, returns Promise.
Example
const pool = new sql.ConnectionPool({
    user: '...',
    password: '...',
    server: 'localhost',
    database: '...'
})

pool.connect(err => {
    // ...
})
Errors
- ELOGIN (ConnectionError) - Login failed.
- ETIMEOUT (ConnectionError) - Connection timeout.
- EALREADYCONNECTED (ConnectionError) - Database is already connected!
- EALREADYCONNECTING (ConnectionError) - Already connecting to database!
- EINSTLOOKUP (ConnectionError) - Instance lookup failed.
- ESOCKET (ConnectionError) - Socket error.
close()
Close all active connections in the pool.
Example
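A minimal sketch, reusing the pool constructed in the connect example above:

// release all connections held by the pool
pool.close()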
Request
const request = new sql.Request( /* [pool or transaction] */ )

If you omit the pool/transaction argument, the global pool is used instead.
Events
- recordset(columns) - Dispatched when metadata for new recordset are parsed.
- row(row) - Dispatched when new row is parsed.
- done(returnValue) - Dispatched when the request is complete.
- error(err) - Dispatched on error.
- info(message) - Dispatched on informational message.
execute (procedure, [callback])
Call a stored procedure.
Arguments
- procedure - Name of the stored procedure to be executed.
- callback(err, recordsets, returnValue) - A callback which is called after execution has completed, or an error has occurred. returnValue is also accessible as a property of recordsets. Optional. If omitted, returns Promise.
Example
const request = new sql.Request()
request.input('input_parameter', sql.Int, value)
request.output('output_parameter', sql.VarChar(50))
request.execute('procedure_name', (err, result) => {
    // ... error checks

    console.log(result.recordsets.length) // count of recordsets returned by the procedure
    console.log(result.recordsets[0].length) // count of rows contained in first recordset
    console.log(result.recordset) // first recordset from result.recordsets
    console.log(result.returnValue) // procedure return value
    console.log(result.output) // key/value collection of output values
    console.log(result.rowsAffected) // array of numbers, each number represents the number of rows affected by executed statements

    // ...
})
Errors
- EREQUEST (RequestError) - Message from SQL Server
- ECANCEL (RequestError) - Cancelled.
- ETIMEOUT (RequestError) - Request timeout.
- ENOCONN (RequestError) - No connection is specified for that request.
- ENOTOPEN (ConnectionError) - Connection not yet open.
- ECONNCLOSED (ConnectionError) - Connection is closed.
- ENOTBEGUN (TransactionError) - Transaction has not begun.
- EABORT (TransactionError) - Transaction was aborted (by user or because of an error).
input (name, [type], value)
Add an input parameter to the request.
Arguments
- name - Name of the input parameter without @ char.
- type - SQL data type of input parameter. If you omit type, the module automatically decides which SQL data type should be used based on the JS data type.
- value - Input parameter value. undefined and NaN values are automatically converted to null values.
Example
request.input('input_parameter', value)
request.input('input_parameter', sql.Int, value)
JS Data Type To SQL Data Type Map
- String -> sql.NVarChar
- Number -> sql.Int
- Boolean -> sql.Bit
- Date -> sql.DateTime
- Buffer -> sql.VarBinary
- sql.Table -> sql.TVP

Default data type for unknown object is sql.NVarChar.
You can define your own type map.

sql.map.register(MyClass, sql.Text)

You can also overwrite the default type map.

sql.map.register(Number, sql.BigInt)
Errors (synchronous)
- EARGS (RequestError) - Invalid number of arguments.
- EINJECT (RequestError) - SQL injection warning.
NB: Do not use parameters @p{n} as these are used by the internal drivers and cause a conflict.
output (name, type, [value])
Add an output parameter to the request.
Arguments
- name - Name of the output parameter without @ char.
- type - SQL data type of output parameter.
- value - Output parameter initial value. undefined and NaN values are automatically converted to null values. Optional.
Example

request.output('output_parameter', sql.Int)
request.output('output_parameter', sql.VarChar(50), 'abc')
Errors (synchronous)
- EARGS (RequestError) - Invalid number of arguments.
- EINJECT (RequestError) - SQL injection warning.
toReadableStream
Convert the request to a Node.js ReadableStream
Example
const { pipeline } = require('stream')
const request = new sql.Request()
const readableStream = request.toReadableStream()
pipeline(readableStream, transformStream, writableStream)
request.query('select * from mytable')
Or, if you wanted to increase the highWaterMark of the read stream to buffer more rows in memory:

const { pipeline } = require('stream')
const request = new sql.Request()
const readableStream = request.toReadableStream({ highWaterMark: 100 })
pipeline(readableStream, transformStream, writableStream)
request.query('select * from mytable')
pipe (stream)

Sets the request to stream mode and pulls all rows from all recordsets to a given stream.
Arguments
- stream - Writable stream in object mode.
Example
const request = new sql.Request()
request.pipe(stream)
request.query('select * from mytable')

stream.on('error', err => {
    // ...
})

stream.on('finish', () => {
    // ...
})
query (command, [callback])
Execute the SQL command. To execute commands like create procedure, or if you plan to work with local temporary tables, use batch instead.
Arguments
- command - T-SQL command to be executed.
- callback(err, recordset) - A callback which is called after execution has completed, or an error has occurred. Optional. If omitted, returns Promise.
Example
const request = new sql.Request()
request.query('select 1 as number', (err, result) => {
    // ... error checks
    console.log(result.recordset[0].number) // return 1
    // ...
})
Errors
- ETIMEOUT (RequestError) - Request timeout.
- EREQUEST (RequestError) - Message from SQL Server
- ECANCEL (RequestError) - Cancelled.
- ENOCONN (RequestError) - No connection is specified for that request.
- ENOTOPEN (ConnectionError) - Connection not yet open.
- ECONNCLOSED (ConnectionError) - Connection is closed.
- ENOTBEGUN (TransactionError) - Transaction has not begun.
- EABORT (TransactionError) - Transaction was aborted (by user or because of an error).
const request = new sql.Request()
request.query('select 1 as number; select 2 as number', (err, result) => {
    // ... error checks

    console.log(result.recordset[0].number) // return 1
    console.log(result.recordsets[0][0].number) // return 1
    console.log(result.recordsets[1][0].number) // return 2
})
NOTE: To get the number of rows affected by the statement(s), see section Affected Rows.
batch (batch, [callback])
Execute the SQL command. Unlike query, it doesn't use sp_executesql, so it is not likely that SQL Server will reuse the execution plan it generates for the SQL. Use this only in special cases, for example when you need to execute commands like create procedure which can't be executed with query, or if you're executing statements longer than 4000 chars on SQL Server 2000. You should also use this if you plan to work with local temporary tables (more information here).
Note: Table-Valued Parameter (TVP) is not supported in batch.
Arguments
- batch - T-SQL command to be executed.
- callback(err, recordset) - A callback which is called after execution has completed, or an error has occurred. Optional. If omitted, returns Promise.
Example
const request = new sql.Request()
request.batch('create procedure #temporary as select * from table', (err, result) => {
    // ... error checks
})
Errors
- ETIMEOUT (RequestError) - Request timeout.
- EREQUEST (RequestError) - Message from SQL Server
- ECANCEL (RequestError) - Cancelled.
- ENOCONN (RequestError) - No connection is specified for that request.
- ENOTOPEN (ConnectionError) - Connection not yet open.
- ECONNCLOSED (ConnectionError) - Connection is closed.
- ENOTBEGUN (TransactionError) - Transaction has not begun.
- EABORT (TransactionError) - Transaction was aborted (by user or because of an error).
You can enable multiple recordsets in queries with the request.multiple = true command (a short sketch follows below).
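Following that note, a minimal sketch of a multi-statement batch with the flag set; the statements themselves are illustrative:

const request = new sql.Request()
request.multiple = true

request.batch('select 1 as a; select 2 as b', (err, result) => {
    // ... error checks
    console.log(result.recordsets[0][0].a) // 1
    console.log(result.recordsets[1][0].b) // 2
})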
bulk (table, [options,] [callback])
Perform a bulk insert.
Arguments
- table - sql.Table instance.
- options - Options object to be passed through to the driver (currently tedious only). Optional. If the argument is a function it will be treated as the callback.
- callback(err, rowCount) - A callback which is called after the bulk insert has completed, or an error has occurred. Optional. If omitted, returns Promise.
Example
const table = new sql.Table('table_name') // or temporary table, e.g. #temptable
table.create = true
table.columns.add('a', sql.Int, { nullable: true, primary: true })
table.columns.add('b', sql.VarChar(50), { nullable: false })
table.rows.add(777, 'test')

const request = new sql.Request()
request.bulk(table, (err, result) => {
    // ... error checks
})
IMPORTANT: Always indicate whether the column is nullable or not!
TIP: If you set table.create to true, the module will check whether the table exists before it starts sending data. If it doesn't, it will automatically create it. You can specify primary key columns by setting primary: true in a column's options. Primary key constraint on multiple columns is supported.

TIP: You can also create a Table variable from any recordset with recordset.toTable(). You can optionally specify the table type name in the first argument (a rough sketch follows below).
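As a rough illustration of that TIP, a recordset could be turned into a Table and bulk-inserted; the table and column names are placeholders, and whether you pass a name to toTable() depends on your use case:

const request = new sql.Request()
request.query('select a, b from source_table', (err, result) => {
    // ... error checks

    // build a Table from the returned recordset, then bulk insert it into another table
    const table = result.recordset.toTable('target_table')
    new sql.Request().bulk(table, (err, rowCount) => {
        // ... error checks
    })
})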
Errors
- ENAME (RequestError) - Table name must be specified for bulk insert.
- ETIMEOUT (RequestError) - Request timeout.
- EREQUEST (RequestError) - Message from SQL Server
- ECANCEL (RequestError) - Cancelled.
- ENOCONN (RequestError) - No connection is specified for that request.
- ENOTOPEN (ConnectionError) - Connection not yet open.
- ECONNCLOSED (ConnectionError) - Connection is closed.
- ENOTBEGUN (TransactionError) - Transaction has not begun.
- EABORT (TransactionError) - Transaction was aborted (by user or because of an error).
cancel()
Cancel the currently executing request. Returns true if the cancellation packet was sent successfully.
Example
const request = new sql.Request()
request.query('waitfor delay \'00:00:05\'; select 1 as number', (err, result) => {
    console.log(err instanceof sql.RequestError) // true
    console.log(err.message) // Cancelled.
    console.log(err.code) // ECANCEL
    // ...
})

request.cancel()
Transaction
IMPORTANT: always use the Transaction class to create transactions - it ensures that all your requests are executed on one connection. Once you call begin, a single connection is acquired from the connection pool and all subsequent requests (initialized with the Transaction object) are executed exclusively on this connection. After you call commit or rollback, the connection is then released back to the connection pool.

const transaction = new sql.Transaction( /* [pool] */ )

If you omit the connection argument, the global connection is used instead.
Example

const transaction = new sql.Transaction( /* [pool] */ )
transaction.begin(err => {
    // ... error checks

    const request = new sql.Request(transaction)
    request.query('insert into mytable (mycolumn) values (12345)', (err, result) => {
        // ... error checks

        transaction.commit(err => {
            // ... error checks
            console.log("Transaction committed.")
        })
    })
})
A Transaction can also be created by const transaction = pool.transaction(). Requests can also be created by const request = transaction.request(), as shown in the sketch below.
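For example, the transaction example above could be rewritten with those factory methods; this assumes pool is a connected ConnectionPool:

const transaction = pool.transaction()
transaction.begin(err => {
    // ... error checks

    const request = transaction.request()
    request.query('insert into mytable (mycolumn) values (12345)', (err, result) => {
        // ... error checks

        transaction.commit(err => {
            // ... error checks
            console.log('Transaction committed.')
        })
    })
})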
Aborted transactions
This example shows how you should correctly handle transaction errors when abortTransactionOnError (XACT_ABORT) is enabled. Added in 2.0.

const transaction = new sql.Transaction( /* [pool] */ )
transaction.begin(err => {
    // ... error checks

    let rolledBack = false

    transaction.on('rollback', aborted => {
        // emitted with aborted === true
        rolledBack = true
    })

    new sql.Request(transaction)
        .query('insert into mytable (bitcolumn) values (2)', (err, result) => {
            // insert should fail because of invalid value

            if (err) {
                if (!rolledBack) {
                    transaction.rollback(err => {
                        // ... error checks
                    })
                }
            } else {
                transaction.commit(err => {
                    // ... error checks
                })
            }
        })
})
Events
- begin - Dispatched when the transaction begins.
- commit - Dispatched on successful commit.
- rollback(aborted) - Dispatched on successful rollback with an argument determining if the transaction was aborted (by user or because of an error).
begin ([isolationLevel], [callback])
Begin a transaction.
Arguments
- isolationLevel - Controls the locking and row versioning behavior of TSQL statements issued by a connection. Optional. READ_COMMITTED by default. For possible values see sql.ISOLATION_LEVEL.
- callback(err) - A callback which is called after the transaction has begun, or an error has occurred. Optional. If omitted, returns Promise.
Example

const transaction = new sql.Transaction()
transaction.begin(err => {
    // ... error checks
})
Errors
- ENOTOPEN (ConnectionError) - Connection not yet open.
- EALREADYBEGUN (TransactionError) - Transaction has already begun.
commit ([callback])
Commit a transaction.
Arguments
- callback(err) - A callback which is called after the transaction has committed, or an error has occurred. Optional. If omitted, returns Promise.
Example
const transaction = new sql.Transaction()
transaction.begin(err => {
    // ... error checks

    transaction.commit(err => {
        // ... error checks
    })
})
Errors
- ENOTBEGUN (TransactionError) - Transaction has not begun.
- EREQINPROG (TransactionError) - Can't commit transaction. There is a request in progress.
rollback ([callback])
Rollback a transaction. If the queue isn't empty, all queued requests will be Cancelled and the transaction will be marked as aborted.
Arguments
- callback(err) - A callback which is called after the transaction has rolled back, or an error has occurred. Optional. If omitted, returns Promise.

Example

const transaction = new sql.Transaction()
transaction.begin(err => {
    // ... error checks

    transaction.rollback(err => {
        // ... error checks
    })
})
Errors
- ENOTBEGUN (TransactionError) - Transaction has not begun.
- EREQINPROG (TransactionError) - Can't rollback transaction. There is a request in progress.
Prepared Statement
IMPORTANT: always use the PreparedStatement class to create prepared statements - it ensures that all your executions of the prepared statement are executed on one connection. Once you call prepare, a single connection is acquired from the connection pool and all subsequent executions are executed exclusively on this connection. After you call unprepare, the connection is then released back to the connection pool.

const ps = new sql.PreparedStatement( /* [pool] */ )

If you omit the connection argument, the global connection is used instead.
Example
const ps = new sql.PreparedStatement( /* [pool] */ )
ps.input('param', sql.Int)
ps.prepare('select @param as value', err => {
    // ... error checks

    ps.execute({ param: 12345 }, (err, result) => {
        // ... error checks

        // release the connection after queries are executed
        ps.unprepare(err => {
            // ... error checks
        })
    })
})

IMPORTANT: Remember that each prepared statement means one reserved connection from the pool. Don't forget to unprepare a prepared statement when you've finished your queries!

You can execute multiple queries against the same prepared statement, but you must unprepare the statement when you have finished using it, otherwise you will cause the connection pool to run out of available connections (see the sketch below).
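As an illustration of that point, the same prepared statement can be executed several times before being released; a minimal sketch with placeholder table and parameter names:

const ps = new sql.PreparedStatement()
ps.input('id', sql.Int)
ps.prepare('select * from mytable where id = @id', err => {
    // ... error checks

    // reuse the same prepared statement for several executions
    ps.execute({ id: 1 }, (err, result1) => {
        ps.execute({ id: 2 }, (err, result2) => {
            // release the reserved connection once all executions are done
            ps.unprepare(err => {
                // ... error checks
            })
        })
    })
})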
TIP: You can also create prepared statements in transactions (new sql.PreparedStatement(transaction)), but keep in mind you can't execute other requests in the transaction until you call unprepare.
input (name, type)
Add an input parameter to the prepared statement.
Arguments
- name - Name of the input parameter without @ char.
- type - SQL data type of input parameter.
Example
ps.input('input_parameter', sql.Int)
ps.input('input_parameter', sql.VarChar(50))
Errors (synchronous)
- EARGS (PreparedStatementError) - Invalid number of arguments.
- EINJECT (PreparedStatementError) - SQL injection warning.
output (name, type)
Add an output parameter to the prepared statement.
Arguments
- name - Name of the output parameter without @ char.
- type - SQL data type of output parameter.
Example
ps.output('output_parameter', sql.Int)
ps.output('output_parameter', sql.VarChar(50))
Errors (synchronous)
- EARGS (PreparedStatementError) - Invalid number of arguments.
- EINJECT (PreparedStatementError) - SQL injection warning.
prepare (statement, [callback])

Prepare a statement.
Arguments
- statement - T-SQL statement to prepare.
- callback(err) - A callback which is called after preparation has completed, or an error has occurred. Optional. If omitted, returns Promise.

Example

const ps = new sql.PreparedStatement()
ps.prepare('select @param as value', err => {
    // ... error checks
})
Errors
- ENOTOPEN (ConnectionError) - Connection not yet open.
- EALREADYPREPARED (PreparedStatementError) - Statement is already prepared.
- ENOTBEGUN (TransactionError) - Transaction has not begun.
execute (values, [callback])
Execute a prepared statement.
Arguments
- values - An object whose names correspond to the names of parameters that were added to the prepared statement before it was prepared.
- callback(err) - A callback which is called after execution has completed, or an error has occurred. Optional. If omitted, returns Promise.
Example
const ps = new sql.PreparedStatement()
ps.input('param', sql.Int)
ps.prepare('select @param as value', err => {
    // ... error checks

    ps.execute({ param: 12345 }, (err, result) => {
        // ... error checks

        console.log(result.recordset[0].value) // return 12345
        console.log(result.rowsAffected) // Returns number of affected rows in case of INSERT, UPDATE or DELETE statement.

        ps.unprepare(err => {
            // ... error checks
        })
    })
})

You can also stream the executed request.

const ps = new sql.PreparedStatement()
ps.input('param', sql.Int)
ps.prepare('select @param as value', err => {
    // ... error checks

    ps.stream = true
    const request = ps.execute({ param: 12345 })

    request.on('recordset', columns => {
        // Emitted once for each recordset in a query
    })

    request.on('row', row => {
        // Emitted for each row in a recordset
    })

    request.on('error', err => {
        // May be emitted multiple times
    })

    request.on('done', result => {
        // Always emitted as the last one

        console.log(result.rowsAffected) // Returns number of affected rows in case of INSERT, UPDATE or DELETE statement.

        ps.unprepare(err => {
            // ... error checks
        })
    })
})

TIP: To learn more about how the number of affected rows works, see section Affected Rows.
Errors
- ENOTPREPARED (PreparedStatementError) - Statement is not prepared.
- ETIMEOUT (RequestError) - Request timeout.
- EREQUEST (RequestError) - Message from SQL Server
- ECANCEL (RequestError) - Cancelled.
unprepare ([callback])
Unprepare a prepared statement.
Arguments
- callback(err) - A callback which is called after the statement has been unprepared, or an error has occurred. Optional. If omitted, returns Promise.
Example
const ps = new sql.PreparedStatement()
ps.input('param', sql.Int)
ps.prepare('select @param as value', err => {
    // ... error checks

    ps.unprepare(err => {
        // ... error checks
    })
})
Errors
- ENOTPREPARED (PreparedStatementError) - Statement is not prepared.
CLI
If you want to add the MSSQL CLI tool to your path, you must install it globally with npm install -g mssql.
Setup
Create a .mssql.json
configuration file (anywhere). Structure of the file is the same as the standard configuration object.
{ "user" : "..." , "password" : "..." , "server" : "localhost" , "database" : "..." }
Example
echo "select * from mytable" | mssql /path/to/config
Results in:
[[{ "username" : "patriksimek" , "password" : "tooeasy" }]]
You can also query for multiple recordsets.

echo "select * from mytable; select * from myothertable" | mssql
Results in:
[[{ "username" : "patriksimek" , "countersign" : "tooeasy" }],[{ "id" : 15 , "proper name" : "Product proper name" }]]
If y'all omit config path argument, mssql will try to load it from current working directory.
Overriding config settings
You can override some config settings via CLI options (--user, --password, --server, --database, --port).
echo "select * from mytable" | mssql /path/to/config --database anotherdatabase
Results in:
[[{ "username" : "onotheruser" , "password" : "quiteeasy" }]]
Geography and Geometry
node-mssql has built-in deserializer for Geography and Geometry CLR data types.
Geography
Geography types can be constructed in several different ways. Refer carefully to the documentation to verify the coordinate ordering; the ST methods tend to order parameters as longitude (x) then latitude (y), while custom CLR methods tend to prefer to order them as latitude (y) then longitude (x).
The query:
select geography::STGeomFromText(N'POLYGON((1 1, 3 1, 3 1, 1 1))', 4326)
results in:
{
    srid: 4326,
    version: 2,
    points: [
        Point { lat: 1, lng: 1, z: null, m: null },
        Point { lat: 1, lng: 3, z: null, m: null },
        Point { lat: 1, lng: 3, z: null, m: null },
        Point { lat: 1, lng: 1, z: null, m: null }
    ],
    figures: [ { attribute: 1, pointOffset: 0 } ],
    shapes: [ { parentOffset: -1, figureOffset: 0, type: 3 } ],
    segments: []
}

NOTE: You will also see x and y coordinates in parsed Geography points; they are not recommended for use and have thus been omitted from this example. For compatibility, they remain flipped (x, the horizontal offset, is instead used for latitude, the vertical), and thus risk misleading you. Prefer instead to use the lat and lng properties.
Geometry
Geometry types can also be constructed in several ways. Unlike Geographies, they are consistent in always placing x before y. node-mssql decodes the result of this query:

select geometry::STGeomFromText(N'POLYGON((1 1, 3 1, 3 7, 1 1))', 4326)
into the JavaScript object:
{
    srid: 4326,
    version: 1,
    points: [
        Point { x: 1, y: 1, z: null, m: null },
        Point { x: 1, y: 3, z: null, m: null },
        Point { x: 7, y: 3, z: null, m: null },
        Point { x: 1, y: 1, z: null, m: null }
    ],
    figures: [ { attribute: 2, pointOffset: 0 } ],
    shapes: [ { parentOffset: -1, figureOffset: 0, type: 3 } ],
    segments: []
}
Table-Valued Parameter (TVP)
Supported on SQL Server 2008 and later. You can pass a data table as a parameter to a stored procedure. First, we have to create a custom type in our database.

CREATE TYPE TestType AS TABLE (a VARCHAR(50), b INT);

Next we will need a stored procedure.

CREATE PROCEDURE MyCustomStoredProcedure (@tvp TestType readonly) AS SELECT * FROM @tvp
Now let's go back to our Node.js app.
const tvp = new sql.Table() // You can optionally specify table type name in the first argument.

// Columns must correspond with the type we have created in the database.
tvp.columns.add('a', sql.VarChar(50))
tvp.columns.add('b', sql.Int)

// Add rows
tvp.rows.add('hello tvp', 777) // Values are in same order as columns.

You can send the table as a parameter to a stored procedure.

const request = new sql.Request()
request.input('tvp', tvp)
request.execute('MyCustomStoredProcedure', (err, result) => {
    // ... error checks
    console.dir(result.recordsets[0][0]) // {a: 'hello tvp', b: 777}
})

TIP: You can also create a Table variable from any recordset with recordset.toTable(). You can optionally specify the table type name in the first argument.

You can clear the table rows for easier batching by using table.rows.clear()

const tvp = new sql.Table() // You can optionally specify table type name in the first argument.

// Columns must correspond with the type we have created in the database.
tvp.columns.add('a', sql.VarChar(50))
tvp.columns.add('b', sql.Int)

// Add rows
tvp.rows.add('hello tvp', 777) // Values are in same order as columns.

tvp.rows.clear()
Response Schema
An object returned from a successful basic query would look like the following.

{
    recordsets: [
        [ { COL1: "some content", COL2: "some more content" } ]
    ],
    recordset: [ { COL1: "some content", COL2: "some more content" } ],
    output: {},
    rowsAffected: [ 1 ]
}
Affected Rows
If you're performing INSERT, UPDATE or DELETE in a query, you can read the number of affected rows. The rowsAffected variable is an array of numbers. Each number represents the number of rows affected by a single statement.
Example using Promises

const request = new sql.Request()
request.query('update myAwesomeTable set awesomness = 100').then(result => {
    console.log(result.rowsAffected)
})
Example using callbacks
const request = new sql.Request()
request.query('update myAwesomeTable set awesomness = 100', (err, result) => {
    console.log(result.rowsAffected)
})
Example using streaming

In addition to the rowsAffected attribute on the done event, each statement will emit the number of affected rows as it is completed.

const request = new sql.Request()
request.stream = true
request.query('update myAwesomeTable set awesomness = 100')
request.on('rowsaffected', rowCount => {
    console.log(rowCount)
})
request.on('done', result => {
    console.log(result.rowsAffected)
})
JSON support
SQL Server 2016 introduced built-in JSON serialization. By default, JSON is returned as plain text in a special column named JSON_F52E2B61-18A1-11d1-B105-00805F49916B.
Example
SELECT 1 AS 'a.b.c', 2 AS 'a.b.d', 3 AS 'a.x', 4 AS 'a.y' FOR JSON PATH
Results in:
recordset = [ { 'JSON_F52E2B61-18A1-11d1-B105-00805F49916B': '{"a":{"b":{"c":1,"d":2},"x":3,"y":4}}' } ]

You can enable the built-in JSON parser with config.parseJSON = true. Once you enable this, the recordset will contain rows of parsed JS objects. Given the same example, the result will look like this:

recordset = [ { a: { b: { c: 1, d: 2 }, x: 3, y: 4 } } ]
Important: In order for this to work, there must be exactly one column named JSON_F52E2B61-18A1-11d1-B105-00805F49916B in the recordset.

More information about JSON support can be found in the official documentation.
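Tying this together, a minimal sketch of enabling the parser in the connection config; the connection details are placeholders:

const sql = require('mssql')

const config = {
    user: 'username',
    password: 'password',
    server: 'localhost',
    database: 'database',
    parseJSON: true // FOR JSON results come back as parsed JS objects
}

sql.connect(config).then(() => {
    return sql.query`SELECT 1 AS 'a.b.c', 2 AS 'a.b.d', 3 AS 'a.x', 4 AS 'a.y' FOR JSON PATH`
}).then(result => {
    console.dir(result.recordset[0]) // { a: { b: { c: 1, d: 2 }, x: 3, y: 4 } }
}).catch(err => {
    console.error(err)
})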
Handling Indistinguishable Column Names
If your queries contain output columns with identical names, the default behaviour of mssql will only return column metadata for the last column with that name. You will also not always be able to re-assemble the order of output columns requested.
Default behaviour:
const request = new sql.Request()
request.query("select 'asdf' as name, 'qwerty' as other_name, 'jkl' as name")
    .then(result => {
        console.log(result)
    });
Results in:
{
    recordsets: [
        [ { name: [ 'asdf', 'jkl' ], other_name: 'qwerty' } ]
    ],
    recordset: [ { name: [ 'asdf', 'jkl' ], other_name: 'qwerty' } ],
    output: {},
    rowsAffected: [ 1 ]
}
You can use the arrayRowMode configuration parameter to return the row values as arrays and add a separate array of column values. arrayRowMode can be set globally during the initial connection, or per-request.

const request = new sql.Request()
request.arrayRowMode = true
request.query("select 'asdf' as name, 'qwerty' as other_name, 'jkl' as name")
    .then(result => {
        console.log(result)
    });
Results in:
{
    recordsets: [
        [ [ 'asdf', 'qwerty', 'jkl' ] ]
    ],
    recordset: [ [ 'asdf', 'qwerty', 'jkl' ] ],
    output: {},
    rowsAffected: [ 1 ],
    columns: [
        [
            { index: 0, name: 'name', length: 4, type: [sql.VarChar], scale: undefined, precision: undefined, nullable: false, caseSensitive: false, identity: false, readOnly: true },
            { index: 1, name: 'other_name', length: 6, type: [sql.VarChar], scale: undefined, precision: undefined, nullable: false, caseSensitive: false, identity: false, readOnly: true },
            { index: 2, name: 'name', length: 3, type: [sql.VarChar], scale: undefined, precision: undefined, nullable: false, caseSensitive: false, identity: false, readOnly: true }
        ]
    ]
}
Streaming Duplicate Column Names
When using arrayRowMode with stream enabled, the output from the recordset event (as described in Streaming) is returned as an array of column metadata, instead of as a keyed object. The order of the column metadata provided by the recordset event will match the order of row values when arrayRowMode is enabled.
Default behaviour (without arrayRowMode):

const request = new sql.Request()
request.stream = true
request.query("select 'asdf' as name, 'qwerty' as other_name, 'jkl' as name")
request.on('recordset', recordset => console.log(recordset))
Results in:
{
    name: { index: 2, name: 'name', length: 3, type: [sql.VarChar], scale: undefined, precision: undefined, nullable: false, caseSensitive: false, identity: false, readOnly: true },
    other_name: { index: 1, name: 'other_name', length: 6, type: [sql.VarChar], scale: undefined, precision: undefined, nullable: false, caseSensitive: false, identity: false, readOnly: true }
}
With arrayRowMode:

const request = new sql.Request()
request.stream = true
request.arrayRowMode = true
request.query("select 'asdf' as name, 'qwerty' as other_name, 'jkl' as name")
request.on('recordset', recordset => console.log(recordset))
Results in:
[
    { index: 0, name: 'name', length: 4, type: [sql.VarChar], scale: undefined, precision: undefined, nullable: false, caseSensitive: false, identity: false, readOnly: true },
    { index: 1, name: 'other_name', length: 6, type: [sql.VarChar], scale: undefined, precision: undefined, nullable: false, caseSensitive: false, identity: false, readOnly: true },
    { index: 2, name: 'name', length: 3, type: [sql.VarChar], scale: undefined, precision: undefined, nullable: false, caseSensitive: false, identity: false, readOnly: true }
]
Errors
There are four types of errors you can handle:
- ConnectionError - Errors related to connections and connection pool.
- TransactionError - Errors related to creating, committing and rolling back transactions.
- RequestError - Errors related to queries and stored procedures execution.
- PreparedStatementError - Errors related to prepared statements.
Those errors are initialized in the node-mssql module and their original stack may be cropped. You can always access the original error with err.originalError.

SQL Server may generate more than one error for one request, so you can access preceding errors with err.precedingErrors.
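For example, a catch handler might inspect these properties; a minimal sketch with a placeholder query:

const request = new sql.Request()
request.query('select * from nonexistent_table').catch(err => {
    console.log(err.name)          // e.g. RequestError
    console.log(err.code)          // e.g. EREQUEST
    console.log(err.originalError) // the error raised by the underlying driver
    if (err.precedingErrors) {
        // additional errors SQL Server generated for the same request
        err.precedingErrors.forEach(e => console.log(e.message))
    }
})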
Fault Codes
Each known error has name, code and message properties.
Name | Code | Message |
---|---|---|
ConnectionError | ELOGIN | Login failed. |
ConnectionError | ETIMEOUT | Connection timeout. |
ConnectionError | EDRIVER | Unknown driver. |
ConnectionError | EALREADYCONNECTED | Database is already connected! |
ConnectionError | EALREADYCONNECTING | Already connecting to database! |
ConnectionError | ENOTOPEN | Connection not yet open. |
ConnectionError | EINSTLOOKUP | Instance lookup failed. |
ConnectionError | ESOCKET | Socket error. |
ConnectionError | ECONNCLOSED | Connection is closed. |
TransactionError | ENOTBEGUN | Transaction has not begun. |
TransactionError | EALREADYBEGUN | Transaction has already begun. |
TransactionError | EREQINPROG | Can't commit/rollback transaction. There is a request in progress. |
TransactionError | EABORT | Transaction has been aborted. |
RequestError | EREQUEST | Message from SQL Server. Error object contains additional details. |
RequestError | ECANCEL | Cancelled. |
RequestError | ETIMEOUT | Request timeout. |
RequestError | EARGS | Invalid number of arguments. |
RequestError | EINJECT | SQL injection warning. |
RequestError | ENOCONN | No connection is specified for that request. |
PreparedStatementError | EARGS | Invalid number of arguments. |
PreparedStatementError | EINJECT | SQL injection warning. |
PreparedStatementError | EALREADYPREPARED | Statement is already prepared. |
PreparedStatementError | ENOTPREPARED | Statement is not prepared. |
Detailed SQL Errors
SQL errors (RequestError with err.code equal to EREQUEST) contain additional details.

- err.number - The error number.
- err.state - The error state, used as a modifier to the number.
- err.class - The class (severity) of the error. A class of less than 10 indicates an informational message. Detailed explanation can be found here.
- err.lineNumber - The line number in the SQL batch or stored procedure that caused the error. Line numbers begin at 1; therefore, if the line number is not applicable to the message, the value of LineNumber will be 0.
- err.serverName - The server name.
- err.procName - The stored procedure name.
Informational messages
To receive informational messages generated by PRINT or RAISERROR commands use:

const request = new sql.Request()
request.on('info', info => {
    console.dir(info)
})
request.query('print \'Hello world.\';', (err, result) => {
    // ...
})
Structure of informational message:
- info.message - Message.
- info.number - The message number.
- info.state - The message state, used as a modifier to the number.
- info.class - The class (severity) of the message. Equal to or lower than 10. A detailed explanation can be found here.
- info.lineNumber - The line number in the SQL batch or stored procedure that generated the message. Line numbers begin at 1; if the line number is not applicable to the message, the value of LineNumber will be 0.
- info.serverName - The server name.
- info.procName - The stored procedure name.
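As an additional hedged example (not in the original docs), RAISERROR with a severity of 10 or lower also arrives on the info event, so the fields above can be formatted however you like:

const request = new sql.Request()

request.on('info', info => {
  // class <= 10, so this is informational rather than an error
  console.log(`[${info.number}/${info.state}] line ${info.lineNumber}: ${info.message}`)
})

request.query("raiserror('Step finished.', 0, 1) with nowait", (err, result) => {
  // ...
})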
Metadata
Recordset metadata are accessible through the recordset.columns property.
const request = new sql.Request()

request.query('select convert(decimal(18, 4), 1) as first, \'asdf\' as second', (err, result) => {
  console.dir(result.recordset.columns)

  console.log(result.recordset.columns.first.type === sql.Decimal)  // true
  console.log(result.recordset.columns.second.type === sql.VarChar) // true
})
Columns structure for example above:
{
  first: {
    index: 0,
    name: 'first',
    length: 17,
    type: [sql.Decimal],
    scale: 4,
    precision: 18,
    nullable: true,
    caseSensitive: false,
    identity: false,
    readOnly: true
  },
  second: {
    index: 1,
    name: 'second',
    length: 4,
    type: [sql.VarChar],
    nullable: false,
    caseSensitive: false,
    identity: false,
    readOnly: true
  }
}
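If it helps, here is a small sketch (an assumption, not part of the docs) that walks the columns collection in index order using only the fields shown above:

const request = new sql.Request()

request.query('select convert(decimal(18, 4), 1) as first, \'asdf\' as second', (err, result) => {
  if (err) throw err

  Object.values(result.recordset.columns)
    .sort((a, b) => a.index - b.index) // keep the original column order
    .forEach(col => {
      console.log(`${col.name}: length ${col.length}, nullable ${col.nullable}`)
    })
})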
Data Types
You can define data types with length/precision/scale:
request.input("name", sql.VarChar, "abc")            // varchar(3)
request.input("name", sql.VarChar(50), "abc")        // varchar(50)
request.input("name", sql.VarChar(sql.MAX), "abc")   // varchar(MAX)
request.output("name", sql.VarChar)                  // varchar(8000)
request.output("name", sql.VarChar, "abc")           // varchar(3)

request.input("name", sql.Decimal, 155.33)           // decimal(18, 0)
request.input("name", sql.Decimal(10), 155.33)       // decimal(10, 0)
request.input("name", sql.Decimal(10, 2), 155.33)    // decimal(10, 2)

request.input("name", sql.DateTime2, new Date())     // datetime2(7)
request.input("name", sql.DateTime2(5), new Date())  // datetime2(5)
List of supported data types:
sql.Bit, sql.BigInt, sql.Decimal([precision], [scale]), sql.Float, sql.Int, sql.Money, sql.Numeric([precision], [scale]), sql.SmallInt, sql.SmallMoney, sql.Real, sql.TinyInt
sql.Char([length]), sql.NChar([length]), sql.Text, sql.NText, sql.VarChar([length]), sql.NVarChar([length]), sql.Xml
sql.Time([scale]), sql.Date, sql.DateTime, sql.DateTime2([scale]), sql.DateTimeOffset([scale]), sql.SmallDateTime
sql.UniqueIdentifier
sql.Variant
sql.Binary, sql.VarBinary([length]), sql.Image
sql.UDT, sql.Geography, sql.Geometry
To set the MAX length for VarChar, NVarChar and VarBinary use the sql.MAX length. Types sql.XML and sql.Variant are not supported as input parameters.
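Tying this back to the topic of uploading files, a hedged sketch of storing a file as VarBinary(MAX); the files table and its columns are assumptions, not part of the library:

const fs = require('fs')
const sql = require('mssql')

async function uploadFile (path) {
  const content = fs.readFileSync(path)  // Buffer with the raw file bytes
  const pool = await sql.connect(config) // config as defined in the Examples section

  return pool.request()
    .input('name', sql.NVarChar(255), path)
    .input('content', sql.VarBinary(sql.MAX), content)
    .query('insert into files (name, content) values (@name, @content)')
}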
SQL injection
This module has built-in SQL injection protection. Always use parameters or tagged template literals to pass sanitized values to your queries.
const request = new sql.Request()

request.input('myval', sql.VarChar, '-- commented')
request.query('select @myval as myval', (err, result) => {
  console.dir(result)
})
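The same protection applies to ES6 tagged template literals, where interpolated values become parameters instead of raw SQL; mytable is a placeholder:

const sql = require('mssql')

async function findById (id) {
  await sql.connect(config) // config as defined in the Examples section
  // id is sent as a parameter, not concatenated into the query text
  const result = await sql.query`select * from mytable where id = ${id}`
  return result.recordset
}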
Known Issues
Tedious
- If you're facing problems with connecting to SQL Server 2000, try setting the default TDS version to 7.1 with config.options.tdsVersion = '7_1' (issue); see the config sketch after this list.
- If you're executing a statement longer than 4000 chars on SQL Server 2000, always use batch instead of query (issue).
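For reference, the first workaround would sit in the config roughly like this (all other values are placeholders):

const config = {
  user: '...',
  password: '...',
  server: 'localhost',
  database: '...',
  options: {
    tdsVersion: '7_1' // needed for SQL Server 2000
  }
}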
7.x to 8.x changes
- Upgraded to tedious version 14
- Removed internal library for connection string parsing. Connection strings can be resolved using the static method parseConnectionString on ConnectionPool (see the sketch after this list).
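Assuming the static method behaves as described above, resolving a connection string into a config object might look like this sketch:

const sql = require('mssql')

async function connectFromString (connectionString) {
  // turn the connection string into a plain config object
  const config = sql.ConnectionPool.parseConnectionString(connectionString)

  const pool = new sql.ConnectionPool(config)
  await pool.connect()
  return pool
}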
6.x to 7.x changes
- Upgraded tedious version to v11
- Upgraded msnodesqlv8 version support to v2
- Upgraded tarn.js version to v3
- Requests in stream mode that pipe into other streams no longer pass errors up the stream chain
- Request.pipe now pipes a true node stream for better support of backpressure
- tedious config option trustServerCertificate defaults to false if not supplied
- Dropped support for Node < 10
5.x to 6.x changes
- Upgraded tarn.js so _poolDestroy can take advantage of being a promise
- ConnectionPool.close() now returns a promise / callbacks will be executed once closing of the pool is complete; you must make sure that connections are properly released back to the pool otherwise the pool may fail to close.
- It is safe to pass read-only config objects to the library; config objects are now cloned
- options.encrypt is now true by default
- TYPES.Null has now been removed
- Upgraded tedious driver to v6 and upgraded support for msnodesqlv8
- You can now close the global connection by reference and this will clean up the global connection, e.g. const conn = sql.connect(); conn.close() will be the same as sql.close()
- Bulk table inserts will attempt to coerce dates from non-Date objects if the column type is expecting a date
- Repeat calls to the global connect function (sql.connect()) will return the current global connection if it exists (rather than throwing an error)
- Attempting to add a parameter to queries / stored procedures that already exists will now throw an error; use replaceInput and replaceOutput instead
- Invalid isolation levels passed to Transactions will now throw an error
- ConnectionPool now reports if it is healthy or not (ConnectionPool.healthy), which can be used to determine if the pool is able to create new connections or not (see the sketch after this list)
- Pause/Resume support for streamed results has been added to the msnodesqlv8 driver
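A minimal sketch of two of those changes, assuming the config object from the Examples section: checking ConnectionPool.healthy before handing out requests, and closing the global connection by reference:

const sql = require('mssql')

async function demo () {
  const pool = new sql.ConnectionPool(config)
  await pool.connect()

  if (!pool.healthy) {
    // the pool cannot create new connections; reconnect or fail fast
    console.warn('connection pool is unhealthy')
  }

  const conn = await sql.connect(config) // global connection
  await conn.close()                     // same effect as sql.close()
}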
4.x to 5.x changes
- Moved pool library from node-pool to tarn.js
- ConnectionPool.pool.size deprecated, use ConnectionPool.size instead
- ConnectionPool.pool.available deprecated, use ConnectionPool.available instead
- ConnectionPool.pool.pending deprecated, use ConnectionPool.pending instead
- ConnectionPool.pool.borrowed deprecated, use ConnectionPool.borrowed instead (the sketch after this list reads the replacement properties)
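For example, the replacement properties can be read directly from the pool (a sketch, assuming the config object from the Examples section):

const sql = require('mssql')

async function logPoolStats () {
  const pool = await sql.connect(config)

  console.log('size:     ', pool.size)      // total connections managed by the pool
  console.log('available:', pool.available) // idle connections
  console.log('pending:  ', pool.pending)   // callers waiting for a connection
  console.log('borrowed: ', pool.borrowed)  // connections currently in use
}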
3.x to 4.x changes
- Library & tests are rewritten to ES6.
- Connection was renamed to ConnectionPool.
- Drivers are no longer loaded dynamically, so the library is now compatible with Webpack. To use the msnodesqlv8 driver, use the const sql = require('mssql/msnodesqlv8') syntax.
- Every callback/resolve now returns a result object only. This object contains recordsets (array of recordsets), recordset (first recordset from the array of recordsets), rowsAffected (array of numbers representing the number of rows affected by each insert/update/delete statement) and output (key/value collection of output parameters' values). See the sketch after this list.
- Affected rows are now returned as an array, with a separate number for each SQL statement.
- Directive multiple: true was removed.
- Transaction and PreparedStatement internal queues were removed.
- ConnectionPool no longer emits connect and close events.
- Removed verbose and debug mode.
- Removed support for tds and msnodesql drivers.
- Removed support for Node versions lower than 4.
Source: https://tediousjs.github.io/node-mssql/