diff --git a/doc/CommandReference.html b/doc/CommandReference.html index 647c1b0c..2b3ec0d3 100644 --- a/doc/CommandReference.html +++ b/doc/CommandReference.html @@ -32,9 +32,9 @@
Append an element to the tail of the List value at key
Append an element to the head of the List value at key
Return the length of the List value at key
Return a range of elements from the List at key
Trim the list at key to the specified range of elements
Return the element at index position from the List at key
Set a new value as the element at index position of the List at key
Remove the first-N, last-N, or all the elements matching value from the List at key
Return and remove (atomically) the first element of the List at key
Return and remove (atomically) the last element of the List at key
Blocking LPOP
Blocking RPOP
Return and remove (atomically) the last element of the source List stored at _srckey_ and push the same element to the destination List stored at _dstkey_
Add the specified member to the Set value at key
Remove the specified member from the Set value at key
Remove and return (pop) a random element from the Set value at key
Move the specified member from one Set to another atomically
Return the number of elements (the cardinality) of the Set at key
Test if the specified value is a member of the Set at key
Return the intersection between the Sets stored at key1, key2, ..., keyN
Compute the intersection between the Sets stored at key1, key2, ..., keyN, and store the resulting Set at dstkey
Return the union between the Sets stored at key1, key2, ..., keyN
Compute the union between the Sets stored at key1, key2, ..., keyN, and store the resulting Set at dstkey
Return the difference between the Set stored at key1 and all the Sets key2, ..., keyN
Compute the difference between the Set key1 and all the Sets key2, ..., keyN, and store the resulting Set at dstkey
Return all the members of the Set value at key
Return a random member of the Set value at key
Add the specified member to the Sorted Set value at key or update the score if it already exists
Remove the specified member from the Sorted Set value at key
If the member already exists, increment its score by _increment_; otherwise add the member, setting _increment_ as its score
Return the rank (or index) of _member_ in the sorted set at _key_, with scores being ordered from low to high
Return the rank (or index) of _member_ in the sorted set at _key_, with scores being ordered from high to low
Return a range of elements from the sorted set at key
Return a range of elements from the sorted set at key, exactly like ZRANGE, but the sorted set is traversed in reverse order, from the greatest to the smallest score
Return all the elements with score >= min and score <= max (a range query) from the sorted set
Return the cardinality (number of elements) of the sorted set at key
Return the score associated with the specified element of the sorted set at key
Remove all the elements with rank >= min and rank <= max from the sorted set
Remove all the elements with score >= min and score <= max from the sorted set
Perform a union or intersection over a number of sorted sets with optional weight and aggregate
Set the hash field to the specified value. Creates the hash if needed.
Retrieve the value of the specified hash field.
Set the hash fields to their respective values.
Increment the integer value of the hash at _key_ on _field_ with _integer_.
Test for existence of a specified field in a hash
Remove the specified field from a hash
Return the number of items in a hash.
Return all the fields in a hash.
Return all the values in a hash.
Return all the fields and associated values in a hash.
Set the hash field to the specified value. Creates the hash if needed.
Retrieve the value of the specified hash field.
Get the hash values associated with the specified fields.
Set the hash fields to their respective values.
Increment the integer value of the hash at _key_ on _field_ with _integer_.
Test for existence of a specified field in a hash
Remove the specified field from a hash
Return the number of items in a hash.
Return all the fields in a hash.
Return all the values in a hash.
Return all the fields and associated values in a hash.
Sort a Set or a List according to the specified parameters
Redis atomic transactions
Redis atomic transactions
Redis Publish/Subscribe messaging paradigm implementation
Synchronously save the DB on disk
Asynchronously save the DB on disk
Return the UNIX time stamp of the last successful saving of the dataset on disk
Synchronously save the DB on disk, then shutdown the server
Rewrite the append only file in background when it gets too big
Provide information and statistics about the server
Dump all the received requests in real time
Change the replication settings
Configure a Redis server at runtime
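As a quick, non-authoritative illustration of a few of the List and Set commands listed above, here is a small sketch using the Ruby client (redis-rb) that the transaction examples later in this document also use. The key names (tasks, done, s1, s2) are made up for the example, and a local server on the default port is assumed:
require "redis"
r = Redis.new                    # assumes a local Redis on the default port
# List commands: push to both ends, read a range, move an element atomically
r.rpush("tasks", "a")            # append to the tail
r.lpush("tasks", "b")            # prepend to the head
r.lrange("tasks", 0, -1)         # => ["b", "a"]
r.rpoplpush("tasks", "done")     # => "a" (popped from tasks, pushed to done)
# Set commands: membership test and intersection
r.sadd("s1", "x"); r.sadd("s1", "y")
r.sadd("s2", "y")
r.sismember("s1", "x")           # => true
r.sinter("s1", "s2")             # => ["y"]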
Remove the specified keys. If a given key does not exist, no operation is performed for this key. The command returns the number of keys removed.+Time complexity: O(1)
Remove the specified keys. If a given key does not exist, no operation is performed for this key. The command returns the number of keys removed.
an integer greater than 0 if one or more keys were removed, 0 if none of the specified keys existed-
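A minimal sketch of the DEL behavior described above, using the Ruby client (key names are hypothetical):
require "redis"
r = Redis.new
r.set("k1", "v1")
r.set("k2", "v2")
# DEL accepts several keys; missing keys are simply skipped
r.del("k1", "k2", "nosuchkey")   # => 2, only the two existing keys are counted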
What happened here is that lpush against the key with a timeout set deleted the key before performing the operation. So there is a simple rule: write operations against volatile keys will destroy the key before performing the operation. Why does Redis use this behavior? In order to retain an important property: a server that receives a given number of commands in the same sequence will end up with the same dataset in memory. Without the delete-on-write semantics, the state of the server would depend on the time of the commands too. This is not a desirable property in a distributed database that supports replication.-
Trying to call EXPIRE against a key that already has an associated timeout will not change the timeout of the key, but will just return 0. If instead the key does not have a timeout associated, the timeout will be set and EXPIRE will return 1.+
What happened here is that LPUSH against the key with a timeout set deleted the key before performing the operation. So there is a simple rule: write operations against volatile keys will destroy the key before performing the operation. Why does Redis use this behavior? In order to retain an important property: a server that receives a given number of commands in the same sequence will end up with the same dataset in memory. Without the delete-on-write semantics, the state of the server would depend on the time the commands were issued. This is not a desirable property in a distributed database that supports replication.+
Trying to call EXPIRE against a key that already has an associated timeout will not change the timeout of the key, but will just return 0. If instead the key does not have a timeout associated, the timeout will be set and EXPIRE will return 1.
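A small sketch of this behavior with the Ruby client (the key name is made up, and the comments follow the semantics described in this document; redis-rb maps the server's 1/0 integer replies to true/false):
require "redis"
r = Redis.new
r.set("cache:item", "hello")
r.expire("cache:item", 10)    # timeout set: server replies 1 (true in redis-rb)
r.expire("cache:item", 100)   # key already volatile: timeout unchanged, server replies 0 (false)
r.ttl("cache:item")           # seconds left before the key is destroyed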
Redis does not constantly monitor keys that are going to be expired. Keys are expired simply when some client tries to access a key, and the key is found to be timed out.
Of course this is not enough as there are expired keys that will never be accessed again. These keys should be expired anyway, so once every second Redis tests a few keys at random among keys with an expire set. All the keys that are already expired are deleted from the keyspace.
Each time a fixed number of keys were tested (100 by default). So if you had a client setting keys with a very short expire faster than 100 per second, the memory continued to grow. When you stopped inserting new keys, the memory started to be freed, 100 keys every second in the best conditions. Under a peak Redis continued to use more and more RAM even if most keys were expired in each sweep.diff --git a/doc/FAQ.html b/doc/FAQ.html index 7c012b2c..531fb708 100644 --- a/doc/FAQ.html +++ b/doc/FAQ.html @@ -58,7 +58,7 @@ Redis for the same objects. This happens because when data is in memory it is full of pointers, reference counters and other metadata. Add to this malloc fragmentation and the need to return word-aligned chunks of memory and you have a clear picture of what happens. So this means to -have 10 times the I/O between memory and disk than otherwise needed.
echo 1 > /proc/sys/vm/overcommit_memory :)
If the overcommit_memory setting is set to zero, fork will fail unless there is as much free RAM as required to really duplicate all the parent memory pages, with the result that if you have a Redis dataset of 3 GB and just 2 GB of free memory it will fail. Setting overcommit_memory to 1 tells Linux to relax and perform the fork in a more optimistic allocation fashion, and this is indeed what you want for Redis.
A good source to understand how Linux Virtual Memory works and other alternatives for overcommit_memory and overcommit_ratio is this classic from Red Hat Magazine, "Understanding Virtual Memory": http://www.redhat.com/magazine/001nov04/features/vm/
KEYS is one of the slow commands that may ruin the DB performance if not used with care*.
In other words this command is intended only for debugging and *special* operations like creating a script to change the DB schema. Don't use it in your normal code. Use Redis Sets in order to group together a subset of objects. Glob style patterns examples:
* h?llo will match hello hallo hhllo
* h*llo will match hllo heeeello
* h[ae]llo will match hello and hallo, but not hillo
Use \ to escape special chars if you want to match them verbatim.
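A tiny sketch of the glob patterns above with the Ruby client (keys are made up; remember the warning that KEYS is only meant for debugging and special operations):
require "redis"
r = Redis.new
r.set("hello", 1); r.set("hallo", 1); r.set("hillo", 1)
r.keys("h?llo")      # => ["hello", "hallo", "hillo"] in some order
r.keys("h[ae]llo")   # => ["hello", "hallo"] in some order, "hillo" is excluded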
Return the specified element of the list stored at the specified key. 0 is the first element, 1 the second and so on. Negative indexes are supported, for example -1 is the last element, -2 the penultimate and so on.-
If the value stored at key is not of list type an error is returned. If the index is out of range an empty string is returned.+
If the value stored at key is not of list type an error is returned. If the index is out of range a 'nil' reply is returned.
Note that even if the average time complexity is O(n), asking for the first or the last element of the list is O(1).
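For example, with the Ruby client (the list name is made up):
require "redis"
r = Redis.new
r.rpush("mylist", "a"); r.rpush("mylist", "b"); r.rpush("mylist", "c")
r.lindex("mylist", 0)     # => "a"  (head of the list, O(1))
r.lindex("mylist", -1)    # => "c"  (tail of the list, O(1))
r.lindex("mylist", 100)   # => nil  (index out of range)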
MULTI, EXEC and DISCARD commands are the foundation of Redis Transactions. A Redis Transaction allows to execute a group of Redis commands in a single step, with two important guarantees:-
A Redis transaction is entered using the MULTI command. The command always replies with OK. At this point the user can issue multiple commands. Instead of executing these commands, Redis will "queue" them. All the commands are executed once EXEC is called.-
Calling DISCARD instead will flush the transaction queue and will exit the transaction.-
The following is an example using the Ruby client:
+EXEC or DISCARD
MULTI, EXEC, DISCARD and WATCH commands are the foundation of Redis Transactions. +A Redis Transaction allows the execution of a group of Redis commands in a single +step, with two important guarantees:
?> r.multi => "OK" >> r.incr "foo" @@ -46,9 +52,14 @@ >> r.exec => [1, 1, 2]-
As it is possible to see from the session above, MULTI returns an "array" of replies, where every element is the reply of a single command in the transaction, in the same order the commands were queued.-
When a Redis connection is in the context of a MULTI request, all the commands will reply with a simple string "QUEUED" if they are correct from the point of view of the syntax and arity (number of arguments) of the command. Some commands may still fail at execution time.-
This is clearer at the protocol level: in the following example one command will fail when executed even if the syntax is right:
+As it is possible to see from the session above, MULTI returns an "array" of +replies, where every element is the reply of a single command in the +transaction, in the same order the commands were queued.
When a Redis connection is in the context of a MULTI request, all the commands +will reply with a simple string "QUEUED" if they are correct from the +point of view of the syntax and arity (number of arguments) of the command. +Some commands may still fail at execution time.
This is clearer at the protocol level: in the following example one command +will fail when executed even if the syntax is right: +Trying 127.0.0.1... Connected to localhost. Escape character is '^]'. @@ -64,16 +75,21 @@ EXEC +OK -ERR Operation against a key holding the wrong kind of value-MULTI returned a two-element bulk reply in which one of these is a +OK code and one is a -ERR reply. It's up to the client lib to find a sensible way to provide the error to the user.-IMPORTANT: even when a command raises an error, all the other commands in the queue will be processed. Redis will NOT stop the processing of commands once an error is found.-Another example, again using the write protocol with telnet, shows how syntax errors are reported ASAP instead:+MULTI returned a two-element bulk reply in which one of these is a +OK +code and one is a -ERR reply. It's up to the client lib to find a sensible +way to provide the error to the user. IMPORTANT: even when a command raises an error, all the other commands in the queue will be processed. Redis will NOT stop the processing of commands once an error is found.+Another example, again using the write protocol with telnet, shows how +syntax errors are reported ASAP instead: +MULTI +OK INCR a b c -ERR wrong number of arguments for 'incr' command-This time due to the syntax error the "bad" INCR command is not queued at all.-The DISCARD command
DISCARD can be used in order to abort a transaction. No command will be executed, and the state of the client is again the normal one, outside of a transaction. Example using the Ruby client:+This time due to the syntax error the "bad" INCR command is not queued +at all. The DISCARD command
DISCARD can be used in order to abort a transaction. No command will be +executed, and the state of the client is again the normal one, outside +of a transaction. Example using the Ruby client: ?> r.set("foo",1) => true >> r.multi @@ -84,9 +100,64 @@ INCR a b c => "OK" >> r.get("foo") => "1" -Return value
Multi bulk reply, specifically:-The result of a MULTI/EXEC command is a multi bulk reply where every element is the return value of every command in the atomic transaction. +Check and Set (CAS) transactions using WATCH
WATCH is used in order to provide a CAS (Check and Set) behavior to +Redis Transactions.
WATCHed keys are monitored in order to detect changes against these keys. +If at least one watched key is modified before the EXEC call, the +whole transaction will abort, and EXEC will return a nil object +(a Null Multi Bulk reply) to notify that the transaction failed.
For example, imagine we need to atomically increment the value +of a key by 1 (I know we have INCR, let's suppose we don't have it).
The first try may be the following: ++val = GET mykey +val = val + 1 +SET mykey $val+This will work reliably only if we have a single client performing the operation at a given time. +If multiple clients try to increment the key at about the same time +there will be a race condition. For instance, clients A and B will both read the +old value, for instance 10. The value will be incremented to 11 by both +clients, and finally SET as the value of the key. So the final value +will be "11" instead of "12".
Thanks to WATCH we are able to model the problem very well: ++WATCH mykey +val = GET mykey +val = val + 1 +MULTI +SET mykey $val +EXEC ++Using the above code, if there are race conditions and another client +modifies the value of val in the time between our call to WATCH and +our call to EXEC, the transaction will fail.
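The same pattern expressed with the Ruby client used in the sessions above might look like the following sketch (it assumes a redis-rb version that supports the non-block multi/exec calls; mykey is just the example key from the text). The loop simply re-issues the operation when EXEC reports a conflict:
require "redis"
r = Redis.new
loop do
  r.watch("mykey")             # watch the key for changes
  val = r.get("mykey").to_i
  r.multi                      # open the transaction, commands are now queued
  r.set("mykey", val + 1)
  break if r.exec              # EXEC returns nil if mykey was modified: retry
end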
We'll just have to re-iterate the operation, hoping this time we'll not get +a new race. This form of locking is called optimistic locking and it is +a very powerful form of locking, as in many problems there are multiple +clients accessing a much bigger number of keys, so it's very unlikely that +there are collisions: usually operations don't need to be performed +multiple times. WATCH explained
So what is WATCH really about? It is a command that will make the EXEC +conditional: we are asking Redis to perform the transaction only if all +the values WATCHed are still the same. Otherwise the transaction is not +entered at all.
WATCH can be called multiple times. Simply, all the WATCH calls will +have the effect of watching for changes starting from the call, up to the +moment EXEC is called.
When EXEC is called, whether it fails or succeeds, all keys are +UNWATCHed. Also when a client connection is closed, everything gets +UNWATCHed.
It is also possible to use the UNWATCH command (without arguments) in order +to flush all the watched keys. Sometimes this is useful because we may +optimistically lock a few keys, since we possibly need to perform a transaction +to alter those keys, but after reading the current content of the keys +we decide not to proceed. When this happens we just call UNWATCH so that +the connection can immediately be used freely for new transactions. WATCH used to implement ZPOP
A good example to illustrate how WATCH can be used to create new atomic +operations otherwise not supported by Redis is to implement ZPOP, that is, +a command that pops the element with the lowest score from a sorted set +in an atomic way. This is the simplest implementation: ++WATCH zset +ele = ZRANGE zset 0 0 +MULTI +ZREM zset ele +EXEC ++If EXEC fails (returns a nil value) we just re-iterate the operation. Return value
Multi bulk reply, specifically:+The result of a MULTI/EXEC command is a multi bulk reply where every element is the return value of every command in the atomic transaction. +If a MULTI/EXEC transaction is aborted because WATCH detected modified keys, a Null Multi Bulk reply is returned.
If member already exists in the sorted set, adds the increment to its score and updates the position of the element in the sorted set accordingly. If member does not already exist in the sorted set it is added with _increment_ as score (that is, as if the previous score was virtually zero). If key does not exist a new sorted set with the specified _member_ as sole member is created. If the key exists but does not hold a sorted set value an error is returned.
The score value can be the string representation of a double precision floating point number. It's possible to provide a negative value to perform a decrement.
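For example, with the Ruby client (the key and member names are made up; redis-rb converts the score string returned by the server into a Float):
require "redis"
r = Redis.new
r.zadd("myzset", 1, "member")
r.zincrby("myzset", 2.5, "member")   # => 3.5, the new score
r.zincrby("myzset", -1, "member")    # a negative increment performs a decrement
r.zscore("myzset", "member")         # => 2.5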
For an introduction to sorted sets check the Introduction to Redis data types page.-
-The score of the member after the increment is performed. +Return value
Bulk reply+The new score (a double precision floating point number) represented as string.diff --git a/doc/ZunionCommand.html b/doc/ZunionCommand.html index edb52a9c..cb5b844d 100644 --- a/doc/ZunionCommand.html +++ b/doc/ZunionCommand.html @@ -16,7 +16,7 @@-ZunionCommand: Contents
ZUNION / ZINTER _dstkey_ _N_ _k1_ ... _kN_ `[`WEIGHTS _w1_ ... _wN_`]` `[`AGGREGATE SUM|MIN|MAX`]` (Redis >
Return value +ZunionCommand: Contents
ZUNIONSTORE _dstkey_ _N_ _k1_ ... _kN_ `[`WEIGHTS _w1_ ... _wN_`]` `[`AGGREGATE SUM|MIN|MAX`]` (Redis >
ZINTERSTORE _dstkey_ _N_ _k1_ ... _kN_ `[`WEIGHTS _w1_ ... _wN_`]` `[`AGGREGATE SUM|MIN|MAX`]` (Redis >
Return valueZunionCommand
@@ -27,8 +27,9 @@-ZUNION / ZINTER _dstkey_ _N_ _k1_ ... _kN_ `[`WEIGHTS _w1_ ... _wN_`]` `[`AGGREGATE SUM|MIN|MAX`]` (Redis >
1.3.5) =
Time complexity: O(N) + O(M log(M)) with N being the sum of the sizes of the input sorted sets, and M being the number of elements in the resulting sorted setCreates a union or intersection of N sorted sets given by keys k1 through kN, and stores it at dstkey. It is mandatory to provide the number of input keys N, before passing the input keys and the other (optional) arguments.-As the terms imply, the ZINTER command requires an element to be present in each of the given inputs to be inserted in the result. The ZUNION command inserts all elements across all inputs.+ZUNIONSTORE _dstkey_ _N_ _k1_ ... _kN_ `[`WEIGHTS _w1_ ... _wN_`]` `[`AGGREGATE SUM|MIN|MAX`]` (Redis >
1.3.12) = +ZINTERSTORE _dstkey_ _N_ _k1_ ... _kN_ `[`WEIGHTS _w1_ ... _wN_`]` `[`AGGREGATE SUM|MIN|MAX`]` (Redis >
1.3.12) =
Time complexity: O(N) + O(M log(M)) with N being the sum of the sizes of the input sorted sets, and M being the number of elements in the resulting sorted setCreates a union or intersection of N sorted sets given by keys k1 through kN, and stores it at dstkey. It is mandatory to provide the number of input keys N, before passing the input keys and the other (optional) arguments.+As the terms imply, the ZINTERSTORE command requires an element to be present in each of the given inputs to be inserted in the result. The ZUNIONSTORE command inserts all elements across all inputs.Using the WEIGHTS option, it is possible to add weight to each input sorted set. This means that the score of each element in the sorted set is first multiplied by this weight before being passed to the aggregation. When this option is not given, all weights default to 1.With the AGGREGATE option, it's possible to specify how the results of the union or intersection are aggregated. This option defaults to SUM, where the score of an element is summed across the inputs where it exists. When this option is set to be either MIN or MAX, the resulting set will contain the minimum or maximum score of an element across the inputs where it exists.Return value
Integer reply, specifically the number of elements in the sorted set at dstkey. diff --git a/doc/ZunionstoreCommand.html b/doc/ZunionstoreCommand.html index a9f74326..862c38bb 100644 --- a/doc/ZunionstoreCommand.html +++ b/doc/ZunionstoreCommand.html @@ -16,7 +16,7 @@-ZunionstoreCommand: Contents
ZUNION / ZINTER _dstkey_ _N_ _k1_ ... _kN_ `[`WEIGHTS _w1_ ... _wN_`]` `[`AGGREGATE SUM|MIN|MAX`]` (Redis >
Return value +ZunionstoreCommand: Contents
ZUNIONSTORE _dstkey_ _N_ _k1_ ... _kN_ `[`WEIGHTS _w1_ ... _wN_`]` `[`AGGREGATE SUM|MIN|MAX`]` (Redis >
ZINTERSTORE _dstkey_ _N_ _k1_ ... _kN_ `[`WEIGHTS _w1_ ... _wN_`]` `[`AGGREGATE SUM|MIN|MAX`]` (Redis >
Return valueZunionstoreCommand
@@ -27,8 +27,10 @@-ZUNION / ZINTER _dstkey_ _N_ _k1_ ... _kN_ `[`WEIGHTS _w1_ ... _wN_`]` `[`AGGREGATE SUM|MIN|MAX`]` (Redis >
1.3.5) =
Time complexity: O(N) + O(M log(M)) with N being the sum of the sizes of the input sorted sets, and M being the number of elements in the resulting sorted setCreates a union or intersection of N sorted sets given by keys k1 through kN, and stores it at dstkey. It is mandatory to provide the number of input keys N, before passing the input keys and the other (optional) arguments.-As the terms imply, the ZINTER command requires an element to be present in each of the given inputs to be inserted in the result. The ZUNION command inserts all elements across all inputs.+ZUNIONSTORE _dstkey_ _N_ _k1_ ... _kN_ `[`WEIGHTS _w1_ ... _wN_`]` `[`AGGREGATE SUM|MIN|MAX`]` (Redis >
1.3.12) = +ZINTERSTORE _dstkey_ _N_ _k1_ ... _kN_ `[`WEIGHTS _w1_ ... _wN_`]` `[`AGGREGATE SUM|MIN|MAX`]` (Redis >
1.3.12) = +Time complexity: O(N) + O(M log(M)) with N being the sum of the sizes of the input sorted sets, and M being the number of elements in the resulting sorted setCreates a union or intersection of N sorted sets given by keys k1 through kN, and stores it at dstkey. It is mandatory to provide the number of input keys N, before passing the input keys and the other (optional) arguments.+As the terms imply, the ZINTERSTORE command requires an element to be present in each of the given inputs to be inserted in the result. The ZUNIONSTORE command inserts all elements across all inputs.Using the WEIGHTS option, it is possible to add weight to each input sorted set. This means that the score of each element in the sorted set is first multiplied by this weight before being passed to the aggregation. When this option is not given, all weights default to 1.With the AGGREGATE option, it's possible to specify how the results of the union or intersection are aggregated. This option defaults to SUM, where the score of an element is summed across the inputs where it exists. When this option is set to be either MIN or MAX, the resulting set will contain the minimum or maximum score of an element across the inputs where it exists.Return value
Integer reply, specifically the number of elements in the sorted set at dstkey.
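A short sketch of the WEIGHTS and AGGREGATE options with the Ruby client (key names are made up, and a redis-rb version exposing zunionstore with :weights and :aggregate options is assumed):
require "redis"
r = Redis.new
r.zadd("zset1", 1, "a"); r.zadd("zset1", 2, "b")
r.zadd("zset2", 3, "b"); r.zadd("zset2", 4, "c")
# Union with per-input weights, keeping the MAX score for members present in both inputs
r.zunionstore("dst", ["zset1", "zset2"], weights: [2, 1], aggregate: "max")   # => 3
r.zrange("dst", 0, -1, with_scores: true)
# => [["a", 2.0], ["b", 4.0], ["c", 4.0]]  (a: 1*2, b: max(2*2, 3*1), c: 4*1)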