class AWS::S3::S3Object

S3Objects represent the data you store on S3. They have a key (their name) and a value (their data). All objects belong to a bucket.

You can store an object on S3 by specifying a key, its data and the name of the bucket you want to put it in:

S3Object.store('me.jpg', open('headshot.jpg'), 'photos')

The content type of the object will be inferred from its extension. If the appropriate content type cannot be inferred, S3 defaults to binary/octet-stream.

If you want to override this, you can explicitly indicate what content type the object should have with the :content_type option:

file = 'black-flowers.m4a'
S3Object.store(
  file,
  open(file),
  'jukebox',
  :content_type => 'audio/mp4a-latm'
)

You can read more about storing files on S3 in the documentation for S3Object.store.

If you want to fetch an object you've stored on S3, you just specify its key and its bucket:

picture = S3Object.find 'headshot.jpg', 'photos'

N.B. Neither when the file appears in a bucket listing nor when it is fetched directly is the actual data for the file downloaded. You get the data for the file like this:

picture.value

You can fetch just the object's data directly:

S3Object.value 'headshot.jpg', 'photos'

Or stream it by passing a block to stream:

open('song.mp3', 'w') do |file|
  S3Object.stream('song.mp3', 'jukebox') do |chunk|
    file.write chunk
  end
end

The data of the file, once downloaded, is cached, so subsequent calls to value won't redownload the file unless you tell the object to reload its value:

# Redownloads the file's data
song.value(:reload)
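
To illustrate, a hypothetical sequence using the song object from the examples further down only downloads the data twice:

song.value          # downloads and caches the data
song.value          # returns the cached data without touching S3
song.value(:reload) # downloads the data again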

Other functionality includes:

# Check if an object exists
S3Object.exists? 'headshot.jpg', 'photos'

# Copying an object
S3Object.copy 'headshot.jpg', 'headshot2.jpg', 'photos'

# Renaming an object
S3Object.rename 'headshot.jpg', 'portrait.jpg', 'photos'

# Deleting an object
S3Object.delete 'headshot.jpg', 'photos'

More about objects and their metadata

You can find out the content type of your object with the content_type method:

song.content_type
# => "audio/mpeg"

You can change the content type as well if you like:

song.content_type = 'application/pdf'
song.store

(Keep in mind that due to limitations in S3's exposed API, the only way to change things like the content_type is to PUT the object onto S3 again. In the case of large files, this will result in fully re-uploading the file.)

A bevy of information about an object can be had using the about method:

pp song.about
{"last-modified"    => "Sat, 28 Oct 2006 21:29:26 GMT",
 "content-type"     => "binary/octet-stream",
 "etag"             => "\"dc629038ffc674bee6f62eb64ff3a\"",
 "date"             => "Sat, 28 Oct 2006 21:30:41 GMT",
 "x-amz-request-id" => "B7BC68F55495B1C8",
 "server"           => "AmazonS3",
 "content-length"   => "3418766"}

You can get and set metadata for an object:

song.metadata
# => {}
song.metadata[:album] = "A River Ain't Too Much To Love"
# => "A River Ain't Too Much To Love"
song.metadata[:released] = 2005
pp song.metadata
{"x-amz-meta-released" => 2005, 
  "x-amz-meta-album"   => "A River Ain't Too Much To Love"}
song.store

That metadata will be saved in S3 and is henceforth available from that object:

song = S3Object.find('black-flowers.mp3', 'jukebox')
pp song.metadata
{"x-amz-meta-released" => "2005", 
  "x-amz-meta-album"   => "A River Ain't Too Much To Love"}
song.metadata[:released]
# => "2005"
song.metadata[:released] = 2006
pp song.metadata
{"x-amz-meta-released" => 2006, 
 "x-amz-meta-album"    => "A River Ain't Too Much To Love"}

Public Class Methods

about(key, bucket = nil, options = {})

Fetch information about the object with key from bucket. Information includes content type, content length, last modified time, and others.

If the specified key does not exist, NoSuchKey is raised.
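
For example, using the bucket and key from the examples above (header values illustrative), the returned About object can be indexed like a hash:

headers = S3Object.about('headshot.jpg', 'photos')
headers['content-type']   # => "image/jpeg"
headers['content-length'] # => "3418766"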

# File lib/aws/s3/object.rb
def about(key, bucket = nil, options = {})
  response = head(path!(bucket, key, options), options)
  raise NoSuchKey.new("No such key `#{key}'", bucket) if response.code == 404
  About.new(response.headers)
end
copy(key, copy_key, bucket = nil, options = {})

Makes a copy of the object with key to copy_key, preserving the ACL of the existing object if the :copy_acl option is true (default false).
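
For example, to copy an object within the photos bucket from the earlier examples and carry over its ACL:

S3Object.copy('headshot.jpg', 'headshot2.jpg', 'photos', :copy_acl => true)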

# File lib/aws/s3/object.rb
def copy(key, copy_key, bucket = nil, options = {})
  bucket          = bucket_name(bucket)
  source_key      = path!(bucket, key)
  default_options = {'x-amz-copy-source' => source_key}
  target_key      = path!(bucket, copy_key)
  returning put(target_key, default_options.merge(options)) do
    acl(copy_key, bucket, acl(key, bucket)) if options[:copy_acl]
  end
end
create(key, data, bucket = nil, options = {})
Alias for: store
delete(key, bucket = nil, options = {})

Delete the object with key from bucket.

Calls superclass method
# File lib/aws/s3/object.rb
def delete(key, bucket = nil, options = {})
  # A bit confusing. Calling super actually makes an HTTP DELETE request. The delete method is
  # defined in the Base class. It happens to have the same name.
  super(path!(bucket, key, options), options).success?
end
exists?(key, bucket = nil)

Checks if the object with key in bucket exists.

S3Object.exists? 'kiss.jpg', 'marcel'
# => true
# File lib/aws/s3/object.rb
def exists?(key, bucket = nil)
  about(key, bucket)
  true
rescue NoSuchKey
  false
end
find(key, bucket = nil)

Returns the object with the given key in the specified bucket. If the specified key does not exist, a NoSuchKey exception will be raised.

# File lib/aws/s3/object.rb
def find(key, bucket = nil)
  # N.B. This is arguably a hack. From what the current S3 API exposes, when you retrieve a bucket, it
  # provides a listing of all the files in that bucket (assuming you haven't limited the scope of what it returns).
  # Each file in the listing contains information about that file. It is from this information that an S3Object is built.
  #
  # If you know the specific file that you want, S3 allows you to make a get request for that specific file and it returns
  # the value of that file in its response body. This response body is used to build an S3Object::Value object.
  # If you want information about that file, you can make a head request and the headers of the response will contain
  # information about that file. There is no way, though, to say, give me the representation of just this given file the same
  # way that it would appear in a bucket listing.
  #
  # When fetching a bucket, you can provide options which narrow the scope of what files should be returned in that listing.
  # Of those options, one is <tt>marker</tt> which is a string and instructs the bucket to return only objects whose key comes after
  # the specified marker according to alphabetic order. Another option is <tt>max-keys</tt> which defaults to 1000 but allows you
  # to dictate how many objects should be returned in the listing. With a combination of <tt>marker</tt> and <tt>max-keys</tt> you can
  # *almost* specify exactly which file you'd like it to return, but <tt>marker</tt> is not inclusive. In other words, if there is a bucket
  # which contains three objects whose keys are respectively 'a', 'b' and 'c', then fetching a bucket listing with marker set to 'b' will only
  # return 'c', not 'b'.
  #
  # Given all that, my hack to fetch a bucket with only one specific file is to set the marker to the result of calling String#previous on
  # the desired object's key, which functionally makes the key ordered one degree higher than the desired object key according to
  # alphabetic ordering. This is a hack, but it should work around 99% of the time. I can't think of a scenario where it would return
  # something incorrect.

  # We need to ensure the key doesn't have extended characters but not uri escape it before doing the lookup and comparing since if the object exists,
  # the key on S3 will have been normalized
  key    = key.remove_extended unless key.valid_utf8?
  bucket = Bucket.find(bucket_name(bucket), :marker => key.previous, :max_keys => 1)
  # If our heuristic failed, trigger a NoSuchKey exception
  if (object = bucket.objects.first) && object.key == key
    object
  else
    raise NoSuchKey.new("No such key `#{key}'", bucket)
  end
end
new(attributes = {}) { |self| ... }

Initializes a new S3Object.
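
A minimal sketch of building an object by hand and then saving it (the key, data and bucket name here are illustrative; Bucket.find is the gem's bucket lookup):

object = S3Object.new(:value => 'hello world!') do |obj|
  obj.key    = 'greeting.txt'
  obj.bucket = Bucket.find('marcel')
end
object.store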

Calls superclass method
# File lib/aws/s3/object.rb
def initialize(attributes = {}, &block)
  super
  self.value  = attributes.delete(:value)
  self.bucket = attributes.delete(:bucket)
  yield self if block_given?
end
rename(from, to, bucket = nil, options = {})

Rename the object with key from to have key to.

# File lib/aws/s3/object.rb
def rename(from, to, bucket = nil, options = {})
  copy(from, to, bucket, options)
  delete(from, bucket)
end
save(key, data, bucket = nil, options = {})
Alias for: store
store(key, data, bucket = nil, options = {})

When storing an object on the S3 servers using S3Object.store, the data argument can be a string or an I/O stream. If data is an I/O stream it will be read in segments and written to the socket incrementally. This approach may be desirable for very large files so they are not read into memory all at once.

# Non streamed upload
S3Object.store('greeting.txt', 'hello world!', 'marcel')

# Streamed upload
S3Object.store('roots.mpeg', open('roots.mpeg'), 'marcel')
# File lib/aws/s3/object.rb
def store(key, data, bucket = nil, options = {})
  validate_key!(key)
  # Must build path before inferring content type in case bucket is being used for options
  path = path!(bucket, key, options)
  infer_content_type!(key, options)

  put(path, options, data) # Don't call .success? on response. We want to get the etag.
end
Also aliased as: create, save
stream(key, bucket = nil, options = {}, &block)
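
Streams the object's data, yielding it to the given block in chunks rather than returning it all at once (see the streaming example near the top of this page). A small hypothetical sketch that tallies the downloaded bytes:

bytes = 0
S3Object.stream('song.mp3', 'jukebox') { |chunk| bytes += chunk.size }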
# File lib/aws/s3/object.rb
def stream(key, bucket = nil, options = {}, &block)
  value(key, bucket, options) do |response|
    response.read_body(&block)
  end
end
url_for(name, bucket = nil, options = {})

All private objects are accessible via an authenticated GET request to the S3 servers. You can generate an authenticated url for an object like this:

S3Object.url_for('beluga_baby.jpg', 'marcel_molina')

By default, authenticated urls expire 5 minutes after they are generated.

Expiration options can be specified either with an absolute time since the epoch via the :expires option, or with a number of seconds relative to now via the :expires_in option:

# Absolute expiration date
# (Expires January 18th, 2038)
doomsday = Time.mktime(2038, 1, 18).to_i
S3Object.url_for('beluga_baby.jpg', 
                 'marcel', 
                 :expires => doomsday)

# Expiration relative to now specified in seconds
# (Expires in 3 hours)
S3Object.url_for('beluga_baby.jpg', 
                 'marcel', 
                 :expires_in => 60 * 60 * 3)

You can specify whether the url should go over SSL with the :use_ssl option:

# Url will use https protocol
S3Object.url_for('beluga_baby.jpg', 
                 'marcel', 
                 :use_ssl => true)

By default, the ssl settings for the current connection will be used.

If you have an object handy, you can use its url method with the same options:

song.url(:expires_in => 30)

To get an unauthenticated url for the object, such as in the case when the object is publicly readable, pass the :authenticated option with a value of false.

S3Object.url_for('beluga_baby.jpg',
                 'marcel',
                 :authenticated => false)
# => http://s3.amazonaws.com/marcel/beluga_baby.jpg
# File lib/aws/s3/object.rb
def url_for(name, bucket = nil, options = {})
  connection.url_for(path!(bucket, name, options), options) # Do not normalize options
end
value(key, bucket = nil, options = {}, &block)

Returns the value of the object with key in the specified bucket.

Conditional GET options

  • :if_modified_since - Return the object only if it has been modified since the specified time, otherwise return a 304 (not modified).

  • :if_unmodified_since - Return the object only if it has not been modified since the specified time, otherwise raise PreconditionFailed.

  • :if_match - Return the object only if its entity tag (ETag) is the same as the one specified, otherwise raise PreconditionFailed.

  • :if_none_match - Return the object only if its entity tag (ETag) is different from the one specified, otherwise return a 304 (not modified).

Other options

  • :range - Return only the bytes of the object in the specified range.
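
A hedged sketch of how these options might be passed (key and bucket reused from the examples above; the exact value formats accepted for each option are an assumption):

# Conditional GET: only fetch the data if it has changed since we last saw it
S3Object.value 'headshot.jpg', 'photos', :if_none_match => picture.about['etag']

# Ranged GET: only fetch the first kilobyte (assumes a Ruby Range is accepted)
S3Object.value 'headshot.jpg', 'photos', :range => 0..1023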

# File lib/aws/s3/object.rb
def value(key, bucket = nil, options = {}, &block)
  Value.new(get(path!(bucket, key, options), options, &block))
end

Private Class Methods

infer_content_type!(key, options)
# File lib/aws/s3/object.rb
def infer_content_type!(key, options)
  return if options.has_key?(:content_type)
  if mime_type = MIME::Types.type_for(key).first
    options[:content_type] = mime_type.content_type
  end
end
validate_key!(key)
# File lib/aws/s3/object.rb
def validate_key!(key)
  raise InvalidKeyName.new(key) unless key && key.size <= 1024
end

Public Instance Methods

about()

Interface to information about the current object. Information is read only, though some of its data can be modified through specific methods, such as content_type and content_type=.

 pp some_object.about
   {"last-modified"    => "Sat, 28 Oct 2006 21:29:26 GMT",
    "x-amz-id-2"       =>  "LdcQRk5qLwxJQiZ8OH50HhoyKuqyWoJ67B6i+rOE5MxpjJTWh1kCkL+I0NQzbVQn",
    "content-type"     => "binary/octet-stream",
    "etag"             => "\"dc629038ffc674bee6f62eb68454ff3a\"",
    "date"             => "Sat, 28 Oct 2006 21:30:41 GMT",
    "x-amz-request-id" => "B7BC68F55495B1C8",
    "server"           => "AmazonS3",
    "content-length"   => "3418766"}

some_object.content_type
# => "binary/octet-stream"
some_object.content_type = 'audio/mpeg'
some_object.content_type
# => 'audio/mpeg'
some_object.store
# File lib/aws/s3/object.rb
def about
  stored? ? self.class.about(key, bucket.name) : About.new
end
belongs_to_bucket?()

Returns true if the current object has been assigned to a bucket. Objects must belong to a bucket before they can be saved onto S3.

# File lib/aws/s3/object.rb
def belongs_to_bucket?
  !@bucket.nil?
end
Also aliased as: orphan?
bucket()

The current object's bucket. If no bucket has been set, a NoBucketSpecified exception will be raised. For cases where you are not sure if the bucket has been set, you can use the belongs_to_bucket? method.
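
A small sketch of both cases (bucket name reused from the examples above):

object = S3Object.new
object.belongs_to_bucket? # => false
object.bucket             # raises NoBucketSpecified

object.bucket = Bucket.find('photos')
object.belongs_to_bucket? # => true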

# File lib/aws/s3/object.rb
def bucket
  @bucket or raise NoBucketSpecified
end
bucket=(bucket)

Sets the bucket that the object belongs to.

# File lib/aws/s3/object.rb
def bucket=(bucket)
  @bucket = bucket
  self
end
copy(copy_name, options = {})

Copies the current object, giving it the name copy_name. Keep in mind that due to limitations in S3's API, this operation requires retransmitting the entire object to S3.

# File lib/aws/s3/object.rb
def copy(copy_name, options = {})
  self.class.copy(key, copy_name, bucket.name, options)
end
create(options = {})
Alias for: store
delete()

Deletes the current object. Trying to save an object after it has been deleted will raise a DeletedObject exception.
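
For example, using the song object from the examples above:

song.delete
song.store # raises DeletedObject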

# File lib/aws/s3/object.rb
def delete
  bucket.update(:deleted, self)
  freeze
  self.class.delete(key, bucket.name)
end
etag(reload = false)
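
Returns the object's ETag with the surrounding quotes stripped, or nil if the object has not been stored yet; pass a truthy reload argument to fetch a fresh value from S3. The value below is illustrative:

song.etag
# => "dc629038ffc674bee6f62eb68454ff3a"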
# File lib/aws/s3/object.rb
def etag(reload = false)
  return nil unless stored?
  expirable_memoize(reload) do
    reload ? about(reload)['etag'][1...-1] : attributes['e_tag'][1...-1]
  end
end
key()

Returns the key of the object. If the key is not set, a NoKeySpecified exception will be raised. For cases where you are not sure if the key has been set, you can use the key_set? method. Objects must have a key set to be saved onto S3. Objects which have already been saved onto S3 will always have their key set.

# File lib/aws/s3/object.rb
def key
  attributes['key'] or raise NoKeySpecified
end
key=(value)

Sets the key for the current object.

# File lib/aws/s3/object.rb
def key=(value)
  attributes['key'] = value
end
key_set?()

Returns true if the current object has had its key set. Objects which have already been saved will always return true. This method is useful for objects which have not been saved yet, so you know whether you need to set the object's key, since you cannot save an object unless its key has been set.

object.store if object.key_set? && object.belongs_to_bucket?
# File lib/aws/s3/object.rb
def key_set?
  !attributes['key'].nil?
end
metadata()

Interface to viewing and editing metadata for the current object. To be treated like a Hash.

some_object.metadata
# => {}
some_object.metadata[:author] = 'Dave Thomas'
some_object.metadata
# => {"x-amz-meta-author" => "Dave Thomas"}
some_object.metadata[:author]
# => "Dave Thomas"
# File lib/aws/s3/object.rb
def metadata
  about.metadata
end
orphan?()
Alias for: belongs_to_bucket?
owner()

Returns the owner of the current object.

# File lib/aws/s3/object.rb
def owner
  Owner.new(attributes['owner'])
end
rename(to, options = {})

Rename the current object. Keep in mind that due to limitations in S3's API, this operation requires retransmitting the entire object to S3.

# File lib/aws/s3/object.rb
def rename(to, options = {})
  self.class.rename(key, to, bucket.name, options)
end
save(options = {})
Alias for: store
store(options = {})

Saves the current object with the specified options. Valid options are listed in the documentation for S3Object::store.

# File lib/aws/s3/object.rb
def store(options = {})
  raise DeletedObject if frozen?
  options  = about.to_headers.merge(options) if stored?
  response = self.class.store(key, value, bucket.name, options)
  bucket.update(:stored, self)
  response.success?
end
Also aliased as: create, save
stored?()

Returns true if the current object has been stored on S3.

# File lib/aws/s3/object.rb
def stored?
  !attributes['e_tag'].nil?
end
url(options = {})

Generates an authenticated url for the current object. Accepts the same options as its class method counterpart, S3Object.url_for.

# File lib/aws/s3/object.rb
def url(options = {})
  self.class.url_for(key, bucket.name, options)
end
value(options = {}, &block)

Lazily loads object data.

Force a reload of the data by passing :reload.

object.value(:reload)

When loading the data for the first time, you can optionally pass a block, which allows you to stream the data in segments.

object.value do |segment|
  send_data segment
end

The full list of options is given in the documentation for its class method counterpart, S3Object::value.

# File lib/aws/s3/object.rb
def value(options = {}, &block)
  if options.is_a?(Hash)
    reload = !options.empty?
  else
    reload  = options
    options = {}
  end
  expirable_memoize(reload) do
    self.class.stream(key, bucket.name, options, &block)
  end
end

Private Instance Methods

proxiable_attribute?(name)
# File lib/aws/s3/object.rb
def proxiable_attribute?(name)
  valid_header_settings.include?(name)
end
valid_header_settings()
# File lib/aws/s3/object.rb
def valid_header_settings
  %w(cache_control content_type content_length content_md5 content_disposition content_encoding expires)
end